Data Center Knowledge | News and analysis for the data center industry
Monday, February 24th, 2014
1:00p |
Metacloud Offers Hosted Version Of Its Private OpenStack Cloud Seeking to bring OpenStack to the enterprise, cloud services company Metacloud is now offering a hosted version of its private OpenStack cloud. The company started by running and managing production-ready private OpenStack clouds on customers’ premises; now it is bringing a hosted solution to market.
“It’s completely hosted. Customers don’t have to worry about the hardware,” said founder and president Steve Curry. “They can turn it on through us. We’re trying to make a measured step. We’re in a really good spot and I’m happy to say that we’ve executed really well so far.” Curry expects the hosted version to grow and eventually overtake the managed, on-premise solution.
And Metacloud is seeing growth, having closed more deals in the last three months than it did in all of 2013. The three-year-old company completed a $10 million Series A round of funding in June 2013.
So far the company has been leveraging Internap exclusively as its infrastructure provider. Internap’s API-driven bare metal provisioning made it a perfect fit for Metacloud’s needs for its own version of OpenStack, CarbonOS, which it is running in a hosting environment.
The hosted version was driven by customer demand. The company seeks to reduce the hurdles customers face when looking to deploy private OpenStack clouds. Metacloud works with a few industry verticals, including telcos, as well as Fortune 5000 businesses. Another type of customer that could leverage the hosted service is one that started on public cloud services such as Amazon Web Services but found them too expensive. Clients that lack the proper infrastructure or in-house talent to run a private cloud might also be a good fit for Metacloud’s hosted service.
Curry noted that a few customers were looking to get out of the public cloud due to cost. The company claims its solution is around 35 to 40 percent cheaper on average, and it comes with the advantages of a private solution, such as reliability and security. Curry also noted that some of the company’s largest customers are looking at the hosted version for bursting capacity during seasonal fluctuations, as well as for applications and data they are not concerned about keeping within their own borders.
The founders of Pasadena-based Metacloud have a strong pedigree. Co-founder and CEO Shaun Lynch previously worked on Ticketmaster’s infrastructure engineering team, ultimately running global operations for a company with $9 billion in annualized revenue. Co-founder and President Steve Curry was a founding member of the Yahoo! Storage operations team, responsible for hundreds of petabytes of online storage, backup data and media management. | 1:30p |
5 Major Drivers of OpEx in the Enterprise Network Michael Bushong is the vice president of marketing at Plexxi. This column is part one of a two-part series looking at the cost factors in your networking infrastructure and how to control them.
 MICHAEL BUSHONG
Plexxi
Capital costs account for nearly two-thirds of the purchasing decision for networking equipment, according to research by IDC. But over the life of the gear, the total cost of ownership (TCO) is dominated by ongoing operational costs – both administration and maintenance of the device. Many TCO models exhaustively look at all sources of expense, but it’s also important to note the key drivers behind OpEx.
When evaluating your equipment budget, consider these five major drivers of OpEx in your network infrastructure:
1. Number of Devices Under Management
The single largest driver of cost is the total number of devices under management. Each device will drive space costs based on its size and the price per rack unit for a particular environment. Similarly, each device will contribute to power and cooling costs.
The total number of devices also impacts ongoing operational costs. For example, the total number of spare chassis and line cards required on-hand will scale linearly with the number of devices deployed, as will the carrying costs for these spares. These carrying costs will increase with the number of different platforms within an architecture, as each platform family requires its own spares.
Additionally, administrative overhead correlates with the number of devices. Each device represents an element that must be provisioned, monitored, troubleshot, audited, and secured. These management costs can be partially offset by provisioning tools (such as DevOps-style tools), automation frameworks, and network controllers that reduce the total number of administrative touch points.
Beyond the easily quantifiable drivers, there is an overarching complexity contribution to ongoing costs. It is impossible to model, but complexity is positively correlated with the number of devices under management. As the number increases, so too will complexity – along with the costs required to manage it.
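To make the device-count relationship concrete, here is a minimal cost-model sketch in Python. Every unit cost, and the one-spare-kit-per-50-devices sparing ratio, is a purely illustrative placeholder rather than a figure from IDC or any vendor.

```python
import math

# Illustrative model of the OpEx components that scale with device count.
# All unit costs below are hypothetical placeholders.
def annual_device_opex(num_devices: int,
                       platform_families: int = 1,
                       rack_units_per_device: int = 2,
                       cost_per_rack_unit: float = 300.0,        # $/RU/year for space
                       power_cooling_per_device: float = 450.0,  # $/device/year
                       devices_per_spare_kit: int = 50,          # assumed sparing ratio
                       spare_kit_cost: float = 8000.0,           # chassis plus line cards
                       carrying_rate: float = 0.20) -> float:
    """Estimate yearly OpEx attributable to the number of devices alone."""
    space = num_devices * rack_units_per_device * cost_per_rack_unit
    power_cooling = num_devices * power_cooling_per_device
    # Spares scale with device count *and* with platform diversity,
    # since each platform family needs its own spare kits.
    spare_kits = platform_families * math.ceil(num_devices / devices_per_spare_kit)
    spares_carrying = spare_kits * spare_kit_cost * carrying_rate
    return space + power_cooling + spares_carrying

# Example: 200 switches drawn from two platform families.
print(f"${annual_device_opex(200, platform_families=2):,.0f} per year")
```

The absolute figures are arbitrary; the point is that doubling the device count roughly doubles every term in the model, which is the linear scaling described above.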
2. Number of Ports Under Management
While the number of devices is a good proxy for environmental costs and administrative overhead, it is the number of ports under management that drives cabling and provisioning costs. The most basic cost tied to ports is the physical cabling required to interconnect the ports. For architectures that utilize many fabric ports, this additional cabling provides connectivity through the fabric but does not increase the total number of servers attached to the network.
How networking gear is cabled also impacts long-term operational costs. In some architectures, the interconnect ports are taken from the same pool as the server ports. For every interconnect port, the server capacity of the switch is reduced by one. For large, distributed architectures that require non-blocking paths through a core switch layer, this can represent a significant percentage of available server facing ports. The result is a larger number of devices to meet overall server port requirements.
Beyond the cabling and power costs, the total number of ports under management serves as a reasonable proxy for provisioning, monitoring, and maintenance costs. Each port represents another entity that must be managed.
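The effect of drawing interconnect ports from the server-facing pool is easy to quantify. The short sketch below, assuming hypothetical 48-port switches, shows how reserving uplink ports inflates the number of switches (and therefore devices) needed to reach a given server port count.

```python
import math

def switches_needed(server_ports_required: int,
                    ports_per_switch: int = 48,
                    uplink_ports_per_switch: int = 0) -> int:
    """Switch count when fabric uplinks consume ports from the server-facing pool."""
    usable = ports_per_switch - uplink_ports_per_switch
    if usable <= 0:
        raise ValueError("uplinks consume every port on the switch")
    return math.ceil(server_ports_required / usable)

# Example: 2,000 server ports on hypothetical 48-port switches.
print(switches_needed(2000))                               # 42 switches, no uplinks reserved
print(switches_needed(2000, uplink_ports_per_switch=24))   # 84 switches for a 1:1 non-blocking design
```

In this example, a non-blocking design that dedicates half of each switch's ports to the fabric doubles the device count, which in turn feeds back into the device-driven costs described in the previous section.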
3. Number of Administrative Touch Points
While the environmental costs will grow linearly with the number of devices, the ongoing operational costs can be mitigated somewhat by reducing the number of administrative touch points in the network.
To some extent, the rise of software-defined networking is a response to the rising operational costs tied to network growth. Controller-based architectures are designed to provide central points of control through which entire networks can be managed, reducing the number of administrative touch points in the network.
By providing a single point from which all devices can be provisioned, monitored, and troubleshot, the overall effort required to do so is greatly reduced. This has the added benefit of driving down human error – the single largest source of network downtime in most networks.
A single point of administration also lends itself well to providing better network visibility. By collecting distributed data and presenting it from a single point, the network is better documented, making troubleshooting tasks shorter and more straightforward. This ultimately impacts metrics like Mean Time to Insight (the time it takes to correctly diagnose and triage new issues) and Mean Time to Repair.
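As a rough illustration of why touch points matter, the sketch below (with entirely hypothetical effort figures) compares the annual administrative effort of touching every device individually against driving the same changes through a single controller.

```python
def admin_hours_per_year(devices: int,
                         touch_points: int,
                         changes_per_year: int = 200,
                         hours_per_change_per_touch_point: float = 0.5,
                         incidents_per_year: int = 50,
                         baseline_time_to_insight_hours: float = 4.0) -> float:
    """Rough yearly admin effort as a function of administrative touch points."""
    # Provisioning/change effort scales with how many points must be touched per change.
    change_effort = changes_per_year * touch_points * hours_per_change_per_touch_point
    # Assume central visibility shortens Mean Time to Insight in proportion to the
    # reduction in touch points (a loose, illustrative assumption).
    triage_effort = incidents_per_year * baseline_time_to_insight_hours * (touch_points / devices)
    return change_effort + triage_effort

devices = 200
print(admin_hours_per_year(devices, touch_points=devices))  # device-by-device management
print(admin_hours_per_year(devices, touch_points=1))        # controller-based management
```

Again, the absolute hours are placeholders; what matters is that effort scales with the number of touch points rather than with the number of devices once a central point of control is in place.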
4. Number of Integration Points
Capability in isolation is useless. Ultimately, data center networking gear must be integrated with surrounding infrastructure to provide any real value. That surrounding infrastructure certainly includes other networking devices, but integration extends well beyond network interoperability.
Network infrastructure must be integrated with surrounding compute, storage, and application components. Each integration requires time and money. Accordingly, the number of points at which these integrations must be executed is a cost driver. Architectures requiring device-by-device integration will incur high costs; those that handle integration through central points will incur lower costs. These costs are incurred both at the time of integration and at any subsequent change.
Beyond the sheer number, most of these integrations require some exchange of data. If each supporting tool is responsible for harvesting information separately, the effort to integrate will be higher. To the extent that architectures can provide a common means of extracting data from the system, these costs can be lowered.
5. Number of Management Models
Beyond just the number of devices that must be managed, the number of disparate ways in which those devices are managed is also a cost driver. Where architectures are standardized around a single device type or family of devices, there is typically one management model. The single operating system environment lends itself well to developing and leveraging a single set of training materials, provisioning models and templates, standard operating procedures, and supporting processes like auditing and change management.
Costs associated with these types of tasks will tend to scale linearly with the number of management models within a data center. Companies that reduce management complexity will see reduced ongoing operational costs.
Controlling these cost drivers should be a primary objective when designing all data centers. While specific device characteristics and capabilities can mitigate costs to some extent, the most significant contributor to ongoing operational costs is the underlying data center architecture. Accordingly, data center architects should consider the long-term cost impacts of architectural designs.
In part two of this series, I will examine specific ways you can plan and architect your network to control costs.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
| 2:55p |
Inside LINX: Key Beachhead for the Euro Network Exchange Model  A piece of equipment similar to this one operates the new LINX PoP at the EvoSwitch data center in northern Virginia. (Photo: LINX NoVA)
MANASSAS, Va. - It’s an unassuming piece of equipment when you first look at it. One cabinet, albeit wider than the rest, sitting inside a data center operated by EvoSwitch. But this piece of hardware is more than just a Point of Presence (PoP) for LINX. It could represent the birth of a new paradigm for U.S. internet exchanges.
LINX is the London Internet Exchange, one of several European exchange operators seeking to bring a European-style interconnection model to the United States. In the European model, Internet traffic exchanges are managed by participants, rather than the colocation providers hosting the infrastructure. LINX NoVA is a member-owned internet exchange that will span multiple data centers. The first of these is here at EvoSwitch in Manassas, with others soon to follow at two other data centers in northern Virginia – DuPont Fabros ACC5 in Ashburn and a CoreSite facility in Reston.
LINX is one of three European providers that have entered the U.S. market in recent months. The others, Amsterdam’s AMS-IX and Frankfurt-based DE-CIX, have targeted the New York/New Jersey market. They are key players in the Open-IX initiative, which hopes to gradually expand the network interconnection options in major U.S. markets. LINX NoVA has been approved, and EvoSwitch is currently in the process of getting Open-IX certified.
Building the Connectivity Story
EvoSwitch has built its story around connectivity, and was the first company to land a LINX node in the States. It connects to LINX via 24 fiber pairs between the LINX rack and its Meet Me Room, a central point in the data center where customers can create direct physical connections between their networks. LINX NoVA is just one of many interconnection options in the EvoSwitch Meet Me Room, dubbed IXroom.
EvoSwitch’s deep European roots made it an ideal candidate for the first LINX landing spot. The Amsterdam-based provider first entered the U.S. market less than a year and a half ago, leasing space in the COPT building in Manassas. The facility is outside of the major data center cluster in Ashburn, but the addition of LINX will likely begin to level the playing field for potential customers. It means rich connectivity, and as the Open-IX movement gathers steam and LINX NoVA continues to grow, EvoSwitch stands to benefit. As more networks open up the market in Virginia, facilities won’t necessarily have to be clustered around Ashburn to meet connectivity needs.
“Everything around here is so concentrated, it’s almost becoming a single point of failure,” said Vincent Rais, Marketing Manager at EvoSwitch. “The idea is to distribute the exchange. It’s a viable model that serves the community, and makes the internet more resilient.”
A tour of the facility reveals that space is already filling up. “There is great momentum and interest in the local market building up combined with interest in Open-IX and LINX NoVA,” said Rais. The company is approaching capacity of its first phase of buildout at WDC1, and expansion is already underway.
A big portion of EvoSwitch’s base is international – either European companies looking for U.S. infrastructure or the inverse. It has a fair number of system integrator customers, including a new breed of SIs that are helping their customers mix private with public clouds.
“It’s clear that by partnering with LINX, we’re positioning as an Internet-centric company,” said Rais. | 4:00p |
VelociData Launches Data Sort Product Suite VelociData launches a new Data Sort Product Suite for speedy critical data sort functions, and Western Union selects Cloudera Enterprise to centralize its customer data in an enterprise data hub.
VelociData announces Data Sort Product Suite. Big data company VelociData announced the availability of a new product suite that delivers improved price/performance for critical data sort functions. The solution leverages a combination of CPU, GPU and FPGA (Field Programmable Gate Array) processors. By dedicating specially configured processor logic to eliminating the bottlenecks that plague conventional approaches to sort execution, VelociData appliances achieve near-wirespeed performance. In one use case with a financial institution, VelociData sorted 500 million rows of user data four times faster than the institution’s existing environment. Sort functions play an essential role in data warehousing and analytic environments. VelociData solutions easily snap into existing infrastructures and do not require complex coding skills.
Cloudera selected by Western Union. Cloudera announced that The Western Union Company is working with Cloudera to centralize its global customer data in an enterprise data hub to drive pattern recognition and predictive modeling. To serve its more than 70 million global customers, Western Union will use Cloudera Enterprise to more efficiently store, process and conduct real-time analysis on one of the world’s largest enterprise data sets. “It is business-critical for companies to collect, store and analyze their data in order to better serve their customers,” said Mike Olson, chairman, founder and chief strategy officer for Cloudera. “Western Union’s Hadoop-powered enterprise data hub deployment is precisely the kind of use case that Cloudera is uniquely equipped to address. Through its use of Cloudera Enterprise, the company is utilizing its data to drive new levels of business and customer insight, extending those benefits across its global organization. Cloudera worked closely with Western Union to ensure that they were well equipped to maximize their data analytics deployment, and achieve business critical insights that support the company’s omni-channel evolution.” | 4:43p |
Level 3 and Windows Azure for Enterprise Cloud Services Level 3 Communications (LVLT) has announced a strategic relationship with Microsoft (MSFT) to deliver private, direct network connections to Microsoft Windows Azure as part of the Level 3 Cloud Connect Solutions partner ecosystem. The collaborative effort gives global enterprises a secure, high-performance network without compromising productivity or revenue. Enterprises gain the ability to operate a more seamless IT environment between their corporate networks, data centers and the Windows Azure cloud platform.
“Microsoft has unmatched experience running data centers and cloud services at global scale,” said Steven Martin, general manager of Windows Azure at Microsoft. “Private, direct network connections between Level 3 Cloud Connect Solutions and Windows Azure put our combined scale and global reach to work for enterprises, helping them to realize greater efficiencies and focus on their core business.”
Additionally, the combined Level 3 Cloud Connect Solutions and Windows Azure offering will include computing and storage optimization, application performance improvements, and improved security. Companies can virtualize network applications across on-premises infrastructure and Windows Azure without compromising security or performance. The joint services support MPLS-based Ethernet Virtual Private Line Service, Virtual Private LAN Service and IP/VPN Service, as well as Level 3’s Security Solutions and Application Performance Management Services.
“Level 3 Cloud Connect Solutions and Windows Azure deliver a fast, reliable and cost-effective path for enterprises to migrate and optimize their cloud strategies,” said Anthony Christie, chief marketing officer of Level 3. “Level 3 is driving a new, more efficient way to connect to the cloud, and our strategic relationship with Windows Azure represents the most recent addition to the global ecosystem we are developing to provide enterprises with greater choice and flexibility to operate within the cloud as a platform for future growth.” | 7:33p |
IBM Acquires Cloudant to Boost Cloud Databases  Some of the thousands of servers inside an IBM SoftLayer data center. IBM today announced it is acquiring database-as-a-service provider Cloudant. (Photo: SoftLayer)
IBM is acquiring Cloudant, a database as a service (DBaaS) provider that enables developers to create next generation mobile and web applications. Delivered as a managed cloud service, Cloudant technology simplifies database management for app developers. The acquisition sits squarely at the intersection of three important areas for IBM: big data, cloud computing and mobile.
Financial terms of the deal were not disclosed. The acquisition of Cloudant is expected to close in the first quarter of 2014.
“IBM is leading the charge in helping its clients take advantage of big data, cloud and mobile,” said Sean Poulley, vice president, databases and data warehousing, IBM. “Cloudant sits squarely at the nexus of these three key transformational areas and enables clients to rapidly deliver an entirely new level of innovative, engaging and data-rich apps to the marketplace.” Cloudant will become part of IBM’s newly formed Information and Analytics Group.
Complement to SoftLayer
After acquiring SoftLayer as its cloud “crown jewel,” the company has been busy developing and acquiring complementary pieces for the SoftLayer infrastructure foundation. IBM became familiar with Cloudant because the startup was a SoftLayer customer. Cloudant has customers in gaming, financial services, mobile device manufacturing, online learning, retail and healthcare.
“Cloudant’s decision to join IBM highlights that the next wave of enterprise technology innovation has moved beyond infrastructure and is now happening at the data layer,” said Cloudant CEO Derek Schoettle. “Our relationship with IBM and SoftLayer has evolved significantly in recent years, with more connected devices generating data at an unprecedented rate. Cloudant’s NoSQL expertise, combined with IBM’s enterprise reliability and resources, adds data layer services to the IBM portfolio that others can’t match.”
Cloudant works with IBM’s big data and analytics portfolio by giving clients a tool to simplify and accelerate the development of scalable mobile and web apps. The acquisition also strengthens IBM’s cloud solutions by providing yet another popular developer tool to build, test, deploy and scale cloud applications on a variety of cloud layers. IBM’s MobileFirst solutions stand to gain as Cloudant will be integral, enabling developers who use Worklight, IBM’s mobile app development software, to quickly create scalable apps that include a variety of structured and unstructured data.
IBM Bets Big on Big Data
The acquisition marks continued cloud and big data investment on the part of IBM. The company has invested heavily in big data and analytics, both in-house and through acquisition. In addition to organic growth through research and development, the company has spent more than $17 billion on more than 30 acquisitions in the space. The result of this heavy focus on business analytics is now a nearly $16 billion business. The $16 billion figure was originally the target for 2015, and the company has upped its projections to $20 billion.
IBM is trying to capitalize on the proliferation of mobile device usage worldwide with the Cloudant acquisition. With five petabytes of data being created every day by mobile phone subscribers around the world, user data must be always available and easily accessible to massive volumes and networks of users and devices. Cloudant helps developers build the scalable applications that make this possible.
“IBM has a rich history in the field of data management, and one that will truly differentiate Cloudant’s technology in the marketplace,” said Cloudant CTO and Co-Founder Adam Kocoloski. “Joining IBM allows Cloudant to innovate faster than ever before, and IBM’s track record in open source software gives us complete confidence in our ongoing collaboration with the Apache CouchDB project. Cloudant could not have found a better home than IBM.” The DBaaS company is an active participant and contributor to the open source database community Apache CouchDB, and says it will continue to contribute. | 8:16p |
HP Launches OpenNFV to Speed Deployment of New Services At Mobile World Congress this week in Barcelona, HP (HPQ) introduced OpenNFV, a telecom-focused Network Functions Virtualization (NFV) program designed to help launch new services faster and cheaper. The new program enables communication service providers (CSPs) to accelerate time to market by leveraging commercial off-the-shelf hardware with virtualization to test and deploy new offerings in minutes rather than months.
“NFV represents one of the most significant shifts the telecommunications industry has experienced in 20 years,” said Martin Fink, executive vice president and chief technology officer at HP. “HP’s Open NFV Program combines HP’s technology leadership with a strong partner ecosystem to enable our customers to leverage new market opportunities faster while managing spiraling costs.”
NFV Reference Architecture
To aid the new program, HP has appointed Bethany Mayer to lead its NFV strategy. The companywide effort leverages the breadth and depth of HP’s innovation portfolio to launch an open, standards-based NFV Reference Architecture, HP OpenNFV Labs, and a partner ecosystem of best-in-class NFV applications and services. HP also brings more than 30 years of telco-specific experience, along with more than 5,000 telco professionals, to this new offering, ensuring that carriers have a tried and tested partner supporting them on their NFV journey.
HP’s OpenNFV Reference Architecture (NFV RA) provides a complete architectural ecosystem covering physical servers, storage and networking, virtualization, controllers for software-defined networking (SDN), resource management and orchestration, analytics, telco applications, and a complete operations support system (OSS). It is based on open standards and brings a set of HP’s industry-standard products and capabilities to easily build and deploy the architecture. The HP NFV RA incorporates the HP Virtual Services Router, which is designed to support various virtualized appliances such as multitenant hosted public clouds and virtualized branch Customer Premise Equipment deployments.
Driving Better Agility
“In the environments we work in, it’s key that we have agility and flexibility in what we offer our customers,” said Roy Kaser, chief technology officer and vice president, IP Platforms, Alcatel Lucent. “We partnered with HP because they are a leader in both open systems and in virtualization, which is an important strategy for us going forward. We can rely on HP for their expertise in NFV and in x86 technology, which enables us to free up R&D investment dollars to focus more on innovative software technology.”
Several updated telco applications and offerings are being launched as part of the HP OpenNFV Program, including HP Virtual Home Subscriber Server, HP Multimedia Services Environment, HP Virtual Content Delivery Network Software, HP Financial services, and NFV Consulting Services. HP is exhibiting OpenNFV Program solutions with proof-of-concept demonstrations at Mobile World Congress. |