Data Center Knowledge | News and analysis for the data center industry
Thursday, June 27th, 2013
11:30a
Metacloud Raises $10 Million for Private OpenStack Clouds
Metacloud has raised $10 million in a Series A funding round led by Canaan Partners and joined by existing investors Storm Ventures and AME Cloud Ventures. The funding will go towards sustaining the company’s growth, pace of innovation and global expansion.
Enterprises that are looking deeply into OpenStack but feel they might not be quite ready should take note: Metacloud believes it has done all the work for a production-ready private OpenStack cloud, and the company will fully manage it for you. The company provides OpenStack-based private cloud delivered as a service.
“We’ve spent the last two years on product and R&D – getting this software deployment mechanism right hasn’t been trivial,” said Metacloud co-founder and CEO Sean Lynch, referring to the company’s unique capability to update its private OpenStack cloud platform for customers remotely. “We’ve invested less on sales and marketing. Now we want to scale the company.”
Background in Large Infrastructure
The founders of Metacloud have a strong pedigree. “We have a tech team with a really deep bench,” said co-founder and CEO Sean Lynch, who previously led Ticketmaster’s infrastructure engineering team, ultimately running global operations for a company with $9 billion in annualized revenue. Co-founder and President Steve Curry was a founding member of the Yahoo! Storage operations team responsible for hundreds of petabytes of online storage, backup data and media management.
“One thing that’s different with our team is the big operations background,” said Lynch. “It’s really important with a productized offering like this.”
“In addition to a transformative delivery model, Metacloud defines the gold standard of what we look for in a founding team,” said General Partner Maha Ibrahim of Canaan Partners. “With large scale infrastructure engineering experience gained during their years at Ticketmaster and Yahoo!, Sean Lynch and Steve Curry have the pedigree to deliver on Metacloud’s promise of bringing the ease of the public cloud to the private cloud environment.”
The company charges per socket. In addition to targeting enterprise customers, it says it can help service providers launch solid cloud offerings – in fact, it already has with a few undisclosed customers. The third type of customer it is attracting is “large Amazon EC2 customers that have just grown up,” according to Curry.

12:43p
RackWare Raises $3 Million for Cloud and Data Center Management
RackWare, a company that integrates data center and cloud resources into one scalable managed computing environment, has raised $3 million in Series A funding. The round was led by Kickstart Seed Fund and Osage Partners. The money will go towards beefing up RackWare’s sales and marketing organizations.
“We’re in a unique position for a Series A company in that we already have customers and revenue,” said Sash Sunkara, CEO of RackWare. “We’ll use this funding to grow our sales and marketing teams so we can attack the market with our production-ready Generation 2 product.”
RackWare’s RMM 2.0 software dynamically integrates existing data centers and cloud applications into a seamless, intelligent, and automated cloud computing fabric that simplifies administration and reduces costs while maximizing flexibility and resilience.
“Cloud management software is a fundamental enabler of enterprise cloud computing, and RackWare has a simplified approach to cloud management that enables rapid and efficient cloud migration and automatically optimized operations,” said Gavin Christensen, managing director of Kickstart Seed Fund. “We are excited to be working with the most dynamic company in this market space.”
Kickstart Seed Fund managing director Gavin Christensen will join RackWare’s board of directors, as will Osage Partners managing partner Nate Lentz.
“RackWare’s cloud management software fits perfectly with our focus on enterprise software,” said Nate Lentz, managing partner at Osage Partners. “This is a new market with only a few real players, and RackWare is in a good position to claim a commanding share of it.”

1:04p
Debunking the Myths and Fears of Converged Infrastructure
Deepak Kanwar is a senior manager at Zenoss.
As the data center becomes more complex, converged infrastructure (CI) can help simplify setup and management by minimizing compatibility issues. Converged infrastructure offers many benefits, such as proven design and speed of deployment; still, any change can cause concern within an organization.
Four concerns often raised when migrating to converged infrastructure are: running out of shared resources, poor customer support, unreliable service, and new problems arising from the constantly changing nature of converged infrastructure. Below, I detail these four concerns and how you can address and dispel them in your company.
Running out of Shared Resources
With converged infrastructure, there is only one shared pool of resources. This can be disconcerting for software developers who are worried that someone else’s application will starve theirs for resources.
This risk can be easily mitigated through the use of a tool that can monitor applications and their needed resources. Be sure to baseline applications before you implement converged infrastructure to set users’ expectations. Continue to monitor application behavior within converged infrastructure, so you can spot issues before they become major problems.
Realizing private cloud capabilities on top of a CI stack allows you to monitor and meter utilization rates for various departments. Understanding the patterns of business activities across different departments allows you to efficiently and dynamically manage resources ensuring appropriate resources for all of your “tenants.” Dialing up and dialing down capacity is easily accomplished within the private cloud environment.
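The baseline-then-monitor approach described above can be sketched in a few lines. This is a minimal illustration, not a product recommendation: the application names, sample data, and three-sigma threshold are hypothetical, and a real deployment would pull samples from a monitoring tool’s API rather than hard-coded lists.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Record each app's mean and standard deviation of CPU usage
    before the move to converged infrastructure."""
    return {app: (mean(vals), stdev(vals)) for app, vals in samples.items()}

def flag_anomalies(baseline, current, n_sigmas=3):
    """After migration, flag any app whose current usage drifts more
    than n_sigmas from its pre-migration baseline."""
    flagged = []
    for app, usage in current.items():
        mu, sigma = baseline[app]
        if abs(usage - mu) > n_sigmas * sigma:
            flagged.append(app)
    return flagged

# Pre-migration CPU samples (percent), per application.
before = {"billing": [20, 22, 21, 19, 23], "reporting": [40, 42, 41, 39, 38]}
baseline = build_baseline(before)

# Post-migration readings: billing has spiked, reporting is normal.
print(flag_anomalies(baseline, {"billing": 85, "reporting": 41}))  # prints ['billing']
```

The point is simply that a baseline taken before the migration gives you an objective reference for setting users’ expectations and for spotting a noisy-neighbor problem early.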
Poor Customer Support
One common concern around converged infrastructure is that customer support will be poor because your IT team is no longer in charge of the hardware. When something fails, whom would you turn to?
One of the greatest advantages of CI is the “single throat to choke.” Since the entire stack is built and tested by a single provider, you often have a single support line to reach out to in case of an issue instead of three or more directions that you might have to turn to in a traditional in-house infrastructure.
Major providers such as HP, IBM, and Dell all include their own components within their stacks, while VCE, a joint venture between Cisco and EMC offering converged cloud infrastructure, uses components from those market leaders. Either way, CI stacks are built from components these companies are intimately familiar with and can readily test for compatibility and interoperability. In a recent survey conducted by Zenoss, three out of four converged infrastructure adopters saw improvement in their customer service.
Unreliable Services
Cloud computing has earned a reputation for being less than reliable, making workers hesitant to use it. However, cloud computing based on converged infrastructure has proven to be extremely reliable, for the reasons discussed above. In fact, according to IDC, VCE customers average 0.5 infrastructure incidents a year, leading to 83x better availability. Converged infrastructure products and services are tested together before going to market, resulting in fewer interoperability and configuration problems once a system is set up and better service availability.
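Incident-rate figures like the ones above translate into availability through simple arithmetic. In this sketch, the 0.5 incidents/year rate is the IDC figure quoted above, while the 4-hour mean time to repair (MTTR) and the 12-incidents/year comparison baseline are illustrative assumptions of ours, not numbers from the study.

```python
HOURS_PER_YEAR = 365 * 24

def availability(incidents_per_year, mttr_hours):
    """Fraction of the year the infrastructure is up, given an
    incident rate and a mean time to repair per incident."""
    downtime = incidents_per_year * mttr_hours
    return 1 - downtime / HOURS_PER_YEAR

# 0.5 incidents/year (the IDC figure quoted above) at an assumed 4h MTTR:
ci = availability(0.5, 4)        # 2 hours of downtime per year
# An illustrative legacy baseline of 12 incidents/year at the same MTTR:
legacy = availability(12, 4)     # 48 hours of downtime per year

print(f"CI downtime: {(1 - ci) * HOURS_PER_YEAR:.1f} h/yr")
print(f"Downtime reduction: {(1 - legacy) / (1 - ci):.0f}x")
```

Under these assumed inputs the CI environment sees a 24x downtime reduction; the actual multiple depends entirely on the incident rate and repair time of the environment being replaced.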
Change Can Create Problems
By design, application-to-IT-asset relationships within converged infrastructures are constantly changing. This can be unsettling to workers, especially because troubleshooting can be complex. A unified monitoring solution that reports not only the health of the components but also of the overall service helps address these concerns. IT departments must adapt to this constantly changing environment, delivering better, faster tools and services. Converged infrastructure adds flexible data center capacity, letting growing companies rapidly launch new applications and stay ahead of the competition.
Proving to be a Viable Alternative
Converged infrastructure brings a whole new paradigm to the data center. There is no longer a need for highly specialized architects to design and build a solution that supports the needs of the organization. Instead of spending your best IT resources on building infrastructure, you free them to build value for your company and your customers. And by running your new applications on this pre-built, highly reliable infrastructure, you dramatically reduce the time to market for new capabilities. It is a win-win situation.
Notwithstanding the initial reluctance to adopt converged infrastructure, the market is growing. According to a recent IDC study, spending on converged infrastructure will hit $17.8 billion in 2016. Plenty of surveys and reports can further support the case for converged infrastructure; hopefully these explanations help as well.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

2:40p
Networking: Upgrading from 10Gb/s to 40Gb/s and Beyond
As more demands have been placed on the data center, administrators have turned to fiber solutions to obtain the LAN bandwidth they require. In some heavy-utilization instances, 10Gb/s is just not enough. This is where administrators may run into the challenge of upgrading from 10Gb/s to 40Gb/s and beyond.
CommScope offers a variety of pre-terminated fiber solutions that utilize multi-fiber connectors to facilitate rapid deployment of fiber networks in data centers and other high-density environments. Within the SYSTIMAX brand, these solutions are InstaPATCH 360 and InstaPATCH Plus. The Uniprise solution is referred to as ReadyPATCH. In this white paper CommScope guides the conversation around the mechanisms required to upgrade from a 10Gb/s infrastructure to 40Gb/s and even further if needed.
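The fiber-count arithmetic behind these upgrade paths is straightforward. Per IEEE 802.3ba, the multimode variants of 40G and 100G Ethernet run parallel 10Gb/s lanes (four lanes for 40GBASE-SR4, ten for 100GBASE-SR10), with one fiber per lane per direction; the helper function below is our own illustration of that rule, not anything from the white paper.

```python
LANE_RATE_GBPS = 10  # IEEE 802.3ba multimode PMDs run 10 Gb/s per lane

def fibers_required(total_gbps):
    """Parallel optics dedicate one fiber per lane per direction,
    so a duplex link needs 2 * (speed / lane rate) fibers."""
    lanes = total_gbps // LANE_RATE_GBPS
    return 2 * lanes

print(fibers_required(10))   # 2 fibers  (traditional duplex channel)
print(fibers_required(40))   # 8 fibers  (40GBASE-SR4, one MPO-12 trunk)
print(fibers_required(100))  # 20 fibers (100GBASE-SR10)
```

This is why MPO-based pre-terminated trunks make the jump easy: a 12-fiber MPO connector already carries enough strands for a 40Gb/s channel, so the upgrade is largely a matter of re-terminating ends rather than pulling new cable.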
With detailed drawings and descriptions, the white paper outlines various fiber deployment methodologies including the following:
- Traditional two-fiber application channel with InstaPATCH 360.
- Two-fiber fan-out channel with InstaPATCH 360 fan-out cables.
- Optimized parallel transmission channel with InstaPATCH Plus/ReadyPATCH.
Download this detailed white paper to see how CommScope can help create a more robust network infrastructure by simplifying wiring and increasing bandwidth throughput. According to CommScope, for Ethernet networking speeds above 10Gb/s, the application standards specify parallel optics for multimode fiber networks. IEEE 802.3ba defines the transmission schemes for 40Gb/s and 100Gb/s, and the interfaces for these higher speeds are based on the MPO connector. As such, it is a relatively simple process to upgrade a CommScope pre-terminated solution from 10Gb/s to 40Gb/s or even 100Gb/s.

2:45p
Hortonworks Supercharges Hadoop
At the Hadoop Summit North America Wednesday in San Jose, Hortonworks announced the availability of the Hortonworks Data Platform (HDP) 2.0 Community Preview and the launch of the Hortonworks Certification Program for Apache Hadoop YARN to accelerate the availability of YARN-based partner solutions.
YARN is a sub-project of Hadoop at the Apache Software Foundation that separates the resource management and processing components. The YARN-based architecture of Hadoop 2.0 provides a more general processing platform that is not constrained to MapReduce.
Culmination of Years of Work
“Hortonworks Data Platform 2.0 marks a truly pivotal moment for Apache Hadoop and represents the culmination of many years of hard engineering work,” said Arun Murthy, founder, Hortonworks. “The power of YARN to enable applications to run ‘in’ Hadoop, instead of ‘on’ Hadoop, is the key to leveraging all other common services of the next-generation data platform, from security to data lifecycle management. We look forward to growing HDP 2.0 as the 100-percent open source Hadoop distribution of choice for enterprises and an IT ecosystem dedicated to making the next-generation platform a core component of the enterprise data architecture.”
HDP 2.0 marks a significant step forward in the Apache Hadoop architecture, and the community preview is the first distribution to include the upcoming beta release of Apache YARN that enables a wide range of data processing applications to run natively in Hadoop with predictable performance and quality of service. More than four years in the making, YARN takes Hadoop beyond batch and the restraints of a single application and single-use to enable organizations to store large amounts of data in Hadoop and then interact with it in multiple ways from batch, interactive, streaming and more – and all with a consistent level and quality of service.
Certification Program for Developers
The Hortonworks Certification Program for Apache Hadoop YARN is designed to support the Apache Hadoop ecosystem behind this next-generation architecture by helping application developers build and certify their applications to use the YARN architecture of Hadoop 2.0. Participants in this program have been instrumental in the testing and delivery of this new framework.
“TIBCO is investing heavily in the area of big data and analytics,” said Tom Laffey, executive vice president, products and technology, TIBCO Software. “As part of this program we are integrating several of our strategic products with Apache Hadoop 2.0. We are pleased to be a part of the YARN 2.0 Certification Program and look forward to helping bridge the gap between data at rest and data in motion.”
“YARN is the future of Hadoop,” said Mark Terenzoni, CEO, Sqrrl. “We are excited to begin work with HDP 2.0 and partner with Hortonworks to deliver secure, scalable and real-time applications to our mutual customers. With Sqrrl’s data-centric security and YARN’s ability to support multiple applications simultaneously, customers can easily create secure multi-tenant analytic environments.”
Hortonworks Data Platform 2.0 Community Preview is available immediately as a downloadable single-node instance that runs inside a virtual machine, and also as a complete installation for deployment to distributed infrastructure.

7:11p
What’s the Best Caption?
Like everyone else, we have been mega-busy at Data Center Knowledge lately. We missed posting the voting on the last cartoon, so we are voting today.
So please scroll below and place your vote for the best caption for the cartoon titled “Hanging Around the Data Center.”
Take Our Poll
For the previous cartoons on DCK, see our Humor Channel. Please visit Diane’s website Kip and Gary for more of her data center humor.

8:00p
Cisco Ushers In Application-Centric Infrastructure
A look at the Cisco Nexus 7700 10-port switch, part of Cisco’s Unified Computing System (UCS) architecture. (Photo: Cisco)
At Cisco Live in Orlando this week, Cisco (CSCO) unveiled its data center networking architecture, designed to usher in the era of Application-Centric Infrastructure. The architecture, developed with Cisco’s majority-owned subsidiary Insieme Networks, aims to transform data centers to better address the demands of new and current applications in the cloud era.
Application Centric Infrastructure
The shift to application-centric infrastructure will give IT the ability to deliver business applications to end users quickly, with a simpler operational model, scalable and secure infrastructure, and optimized cost. With this architecture, application deployment time is reduced by automated, programmable network infrastructure that presents a common open platform across physical and virtual applications. It also features a holistic, simplified approach that integrates infrastructure, services, and security, coupled with real-time telemetry and extensibility to future services. The architecture will include a comprehensive set of published, open APIs, and will allow customers to migrate to 40G today and 100G in the future, optimizing both capital and operational expenditures.
Dynamic Fabric Automation and Nexus expansion
Cisco announced updates to its Nexus portfolio, evolving the Unified Fabric with simplified provisioning, better management, and new switches. Cisco Prime DCNM 7.0 includes open APIs and centralized fabric management with automated network provisioning, a common point of fabric access, and host, network, and tenant visibility. The portfolio supports optimized spine-leaf topologies with enhanced forwarding, a distributed control plane, and integration of physical and virtual networks, and delivers greater resiliency with smaller failure domains and multi-tenant scale beyond 10,000 tenants/networks.
Cisco launched new Nexus 7700 Series switches and new F3 Series I/O modules. The F3 modules, supported on both the 7000 and 7700 Series switches, deliver 40G/100G density, improve power efficiency by 60 percent, and support a broad set of proven data center switching features. The new Nexus 7718 is a high-capacity 40G/100G switch with up to 384 40Gbps ports and 192 100Gbps ports, designed to deliver up to 83Tbps of overall switching capacity.

8:30p
CoreSite Now Using FieldView DCIM
The server hall of a data center operated by CoreSite, which has chosen software from FieldView Solutions to support an enhanced management plan for its facilities. (Photo: CoreSite Realty)
Data center infrastructure management (DCIM) player FieldView Solutions has scored a big win with CoreSite Realty, a company known for its cloud-enabled data centers and the CoreSite Mesh. FieldView will help CoreSite drive data center efficiency with consolidated monitoring of power, cooling, and space inventory.
FieldView’s software will also play a role in a larger technology transformation project underway at CoreSite, which is an attempt to link all of CoreSite’s business applications, including FieldView, into one integrated view.
“FieldView understands the importance of automated monitoring for the most effective DCIM practices,” said Sev Onyshkevych, Chief Marketing Officer, FieldView Solutions. “As a component of CoreSite’s system, our enterprise monitoring solution will drive efficiency across the data center and enable visibility and transparency around power, cooling, and infrastructure.”
Tight Integration for a “Single Pane” View
FieldView developed a custom interface to integrate its software with CoreSite’s existing system. The FieldView component will allow CoreSite to sync data among multiple systems with greater data consistency. The FieldView solution will enable a single-pane, real-time view of constrained resources, including power, cooling, and space allocation, throughout the data center. It will also significantly strengthen CoreSite’s DCIM monitoring system, which will complement the control capabilities of the company’s building management systems (BMSs).
“Data center providers like CoreSite need better tools and automation to support their customers,” said Bill Wosilius, Senior Vice President, Corporate Operations, CoreSite. “Data center infrastructure management is our livelihood, and a world-class customer experience is essential to our growth. Being able to support our customers more efficiently and effectively is a differentiator that enables us to create added value. That’s why our approach to our technology transformation project was to select best-in-class technologies like FieldView and further strengthen our integration.”
CoreSite’s National Platform Growing
CoreSite is looking to create a large national fabric of deeply connected facilities, and FieldView adds much-needed insight into the status and health of this infrastructure. The company has been aggressively building out data center campuses across America. Focused on network-centric and cloud-oriented applications, these campuses are network-dense.
Most recently, the company announced a facility in Secaucus, a 280,000 square foot building with a first phase of 65,000 square feet. The Secaucus facility follows the launch of CoreSite’s previously announced 15th data center, located in Reston, Virginia. CoreSite’s national platform spans nine U.S. markets and includes more than 275 carriers and service providers and more than 15,000 interconnections.

9:00p
Cequel Acquires Baltimore Technology Park
Some of the cabinets inside the Baltimore Technology Park, a data center in downtown Baltimore that has been acquired by Cequel Data Centers. (Photo: Baltimore Technology Park)
Cequel Data Centers continues to grow through acquisitions. This week the company extended its growing data center footprint to the East Coast with the acquisition of Baltimore Technology Park, a colocation provider serving the Mid-Atlantic region. The deal is Cequel’s fifth acquisition since 2010, as it builds its Tierpoint brand of colo, cloud and disaster recovery services. Financial terms of the transaction were not disclosed.
Baltimore Technology Park currently operates 11,000 square feet of data-center space in downtown Baltimore with plans to add 3,000 square feet of raised floor space to meet strong demand and satisfy existing customers looking to expand in that market. BTP will operate under the Tierpoint brand going forward.
“This acquisition enhances our geographic diversity in a key East Coast market as we continue to respond to the growing national demand for colocation and cloud computing services,” said Paul Estes, Chairman and Chief Executive Officer of Cequel Data Centers. “Baltimore is a dynamic business community and the data center team brings expertise and experience serving enterprise customers. As we have done elsewhere, we plan to continue to grow and expand our facility in Baltimore.”
Cequel Data Centers now owns and operates nearly 130,000 square feet of raised-floor, data-center space in Dallas, Spokane, Seattle, Oklahoma City and Tulsa. Here’s a look at its major acquisitions:
- In July 2010 Cequel launched with the acquisition of Colo4Dallas, which operates a 68,000 square foot data center in the Dallas market.
- In 2011, Cequel bought Perimeter Technologies, which operates data centers in Tulsa and Oklahoma City with a total of 22,000 square feet of space.
- In May 2012 Cequel acquired Tierpoint, a regional provider of colocation and cloud services in the Pacific Northwest. Tierpoint operates three facilities in the Spokane, Washington market.
- In December Cequel continued its expansion in the Pacific Northwest with the purchase of AdHost, a hosting and colocation provider with a facility at Fisher Plaza in Seattle.
Cequel Data Centers was established by Cequel III, a St. Louis investment and management firm that has focused on the cable and telecom sectors. Cequel is partnering with two private equity firms, Thompson Street Capital Partners of St. Louis and Charterhouse Group of New York.
RBC Capital Markets acted as financial advisor to Baltimore Technology Park and was the sole lead arranger and sole bookrunner for the debt financing associated with this acquisition. U.S. Bank and ING Capital acted as co-syndication agents, and CapitalSource Bank acted as documentation agent. Goldman Sachs, CoBank, Raymond James, CIT and Brown Brothers also participated in the credit facility. Paul Hastings acted as counsel for Cequel Data Centers for the acquisition and financing.