Data Center Knowledge | News and analysis for the data center industry
Tuesday, August 20th, 2013
11:30a
Cleversafe Raises $55 Million For Storage Disruption at Petabyte Scale
A closer look at two of Cleversafe’s storage servers: the Slicestor 2200 (2U form factor) and the Slicestor 1440 (4U form factor).
Cleversafe believes it is disrupting the economics of storage at petabyte scale. The company delivers a combination of analytics and storage in a geographically distributed single system, allowing organizations to scale big data environments to hundreds of petabytes, and even exabytes. The company has grown its customer base by 75 percent in the last 12 months, and announced today that it has secured $55 million in funding to support its growth.
The oversubscribed Series D round was led by New Enterprise Associates (NEA), with participation from all major existing investors as well as new investor New World Ventures. The money will go to support Cleversafe’s growth, as well as help it expand into new vertical market segments. It will also boost product development efforts to deliver solutions in conjunction with leading industry partners to help customers reduce storage costs.
“Cleversafe is way ahead of the innovation curve in storage technology,” said Peter Barris, the Managing General Partner of NEA, who will join Cleversafe’s board. “Today, the company is seeing demand for its cost-efficient storage solution from an entirely new class of customer across a growing set of verticals. This demand will only accelerate with an ever-increasing volume of data being generated, and there is no better time for a company with the right technology to step forward to solve today’s business challenges.”
More Than 200 Petabytes Shipped
The company has so far shipped more than 200 petabytes of storage capacity, which Cleversafe notes is the equivalent of the entire written works of mankind from the beginning of recorded history – in all languages, times four.
Cleversafe appointed John Morris as CEO in May to help guide the company during this next stage of growth.
“There is a fast-growing hunger for efficient, cost-effective solutions to manage massive volumes of data, and Cleversafe’s solution completely disrupts the economics of storage at Petabyte-scale,” said Morris. “Our proven technology is displacing legacy boxes that were designed for yesterday’s storage needs across a wide range of industries. What they have in common is breakneck growth in storage needs that were being poorly addressed by old-fashioned products from EMC and others. There are thousands of customers like that and with this round of funding, we’ll expand our coverage of the market so that we can get to many more of them.”
Most recently, the company has advanced combined storage and computation with Hadoop MapReduce, significantly reducing customers’ infrastructure costs for separate servers dedicated to analytical processes.
The company’s flagship product is the dsNet object storage system, which protects both data and metadata equally. The company says dsNet is more reliable and more efficient than traditional RAID-based storage systems. It uses the company’s Information Dispersal technology to slice and disperse data, eliminating single points of failure and centralized synchronization points. As data is distributed evenly across all storage nodes, metadata can scale linearly as new nodes are added, reducing scalability bottlenecks and increasing performance and reliability.
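The slice-and-disperse idea can be illustrated with a toy erasure code. This is not Cleversafe’s actual Information Dispersal algorithm, just a minimal sketch of the principle: data is cut into two slices plus an XOR parity slice, each stored on a different node, and any two of the three slices reconstruct the original.

```python
# Toy 2-of-3 dispersal (illustrative only, not Cleversafe's algorithm):
# two data slices plus an XOR parity slice; any two slices recover the data.
def disperse(data: bytes):
    if len(data) % 2:
        data += b"\x00"                      # pad to even length (toy only)
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]                    # one slice per storage node

def reconstruct(a=None, b=None, parity=None):
    # XOR recovers whichever data slice is missing.
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, parity))
    elif b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))
    return (a + b).rstrip(b"\x00")           # strip toy padding

slices = disperse(b"metadata and data protected alike")
# Simulate losing the first node and rebuilding from the other two:
print(reconstruct(b=slices[1], parity=slices[2]))
```

Production dispersal systems use general k-of-n codes (Reed-Solomon, for example) to tolerate far more flexible loss patterns; the XOR case above is simply the smallest instance of the idea.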
The company focuses on delivering high-performance dispersed object storage solutions across multiple classes of storage to address more demanding customer workloads. It provides scalability so that customers don’t have to worry about outgrowing their storage.

12:00p
OpenStack in AsiaPac: Morphlabs Raises $10 Million
The Morphlabs mCloud Data Center Unit packages SSD-powered infrastructure and software to create private clouds for service providers or enterprises (Photo: Morphlabs).
OpenStack public and private cloud solutions provider Morphlabs has completed a $10 million Series D round, bringing total funding to $22.5 million. The round was led by Tallwood Capital and included existing investor G2iG. The company is using this round to extend its position in AsiaPac, where it says it is seeing healthy demand for OpenStack-based clouds.
“We’re enabling a lot of telcos and service providers in a lot of emerging markets,” said Yoram Heller, the co-founder and chief operating officer of Morphlabs. “These guys needed help in terms of understanding and learning OpenStack, and we help them set up cloud offerings.”
Morphlabs is conducting expert OpenStack training programs throughout Asia, and is organizing the first OpenStack hackathon, called StackHack.
Morphlabs OpenStack-powered mCloud solutions combine best-of-breed software and optimized hardware to deliver an efficient, open source cloud. OpenStack, the open source cloud operating system, is rapidly gaining momentum in Asia as a growing number of enterprises and service providers are deploying OpenStack-based infrastructures to power their public and private clouds.
Accelerating OpenStack and Morphlabs
“The investment from two leading investors such as Tallwood Capital and G2iG is about much more than capital,” said Satoshi Konno, co-founder and GM of Asia Operations, Morphlabs. “It’s about accelerating the Asia OpenStack market and solidifying Morphlabs’ leadership position in it. As an open source technology, OpenStack represents a paradigm shift for enterprises and service providers, and our OpenStack Training for the Asian market will provide ideal support for our users and simplify the deployment of OpenStack clouds throughout the region.”
Morphlabs has developed partnerships with market leaders in Asia, including the server vendor NEC, to deliver OpenStack-based data center technologies. Last month, NEC announced a partnership with Morphlabs to deliver pre-configured and certified mCloud Solutions with NEC Express 5800 servers.
“This last round of funding should take us into profitability. We’re taking this funding to solidify our position in Asia,” said Heller. Morphlabs has offices in the Philippines, Singapore, Japan and Australia, in addition to its Los Angeles headquarters.
The company is one of the founding gold members of OpenStack, but began in 2007 as a Eucalyptus shop, before there was such a thing as OpenStack. “We’re happy to see growth in the community,” said Heller. “Our focus is on a complete systems approach. Not just software, but ultimately when and how you measure use against Amazon. Now these guys have a stick to measure by. We’re the only ones focused on the complete metrics.”
The company’s target market is the dominant telcos in each country, the top three. “Clients are usually a multi-billion-dollar telco,” said Heller. “They have a skill gap still. They’re looking for a technology partner to help them do this.”
The company launched its turnkey public cloud offering, mCloud Osmium, back in February.

12:30p
Energy Savings for Legacy Equipment – Realistic?
Jeff Klaus is the general manager of Data Center Manager (DCM) Solutions at Intel Corporation. Jeff leads a global team that is pioneering power- and thermal-management middleware, which is sold through an ecosystem of data center infrastructure management (DCIM) software companies and OEMs.
JEFF KLAUS, Intel
We should all be very encouraged that energy conservation has been widely embraced by a spectrum of technology manufacturers. Data center energy use, as measured by SPECpower, has dropped by 40 percent over the last five years, even as performance has increased nearly 10x during the same period. [1] Quite a testament to technology advances and IT design best practices. However, data center managers often ask us if there is any way they can cut back the energy consumption of their legacy equipment. They don’t have the budget to replace inefficient hardware or to re-architect their solutions to take advantage of virtualization or retrofits.
Our answer is simple: if you can’t afford to upgrade, you can’t afford NOT to introduce energy optimizations. Energy savings and legacy systems are not mutually exclusive, nor should they be examined in isolation. Efficient resource utilization comes from understanding how the systems consume the shared resources in the data center, regardless of its size.
Getting Started: Gaining Visibility
Advanced energy management solutions provide real-time power and temperature data, and automate the logging of historical performance in one place. Usually implemented as a middleware platform, they are generally non-invasive, and support a broad range of interface protocols facilitating monitoring of legacy and current equipment. Also, energy management solutions utilize collected data to enhance real-time decision making and long-term planning.
The first step toward optimization is to understand your power use. At-a-glance thermal and power maps can help identify the biggest power consumers, and correlate their power and temperatures with workloads. Even when you can’t upgrade or replace these systems, identifying the most inefficient infrastructure offers opportunity for making affordable improvements yielding significant power savings:
- “Ghost” servers and under-utilized servers can be identified and workloads reassigned. Some servers can be put into low-power mode or even powered down during less busy periods. Before introducing energy management solutions, most data centers have approximately 15 percent of their servers idle at any point in time; yet, these servers are still drawing power.
- Rows and racks can be rearranged to avoid hot spots in the data center that drive up cooling costs. On an ongoing basis, monitoring temperature by row, rack and individual server allows you to spot temperature changes before they escalate, while they can still be proactively remedied without driving up cooling costs.
- Airflow handlers can be positioned for maximum efficiency, with potentially some reductions in the numbers of required units.
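The idle-server point above lends itself to a quick back-of-envelope estimate. Everything here except the 15 percent idle rate is an assumed, illustrative figure:

```python
# Back-of-envelope cost of idle "ghost" servers. Only the 15% idle rate
# comes from the article; all other figures are illustrative assumptions.
servers = 1000                 # assumed fleet size
idle_fraction = 0.15           # ~15% of servers idle at any point in time
idle_power_w = 150             # assumed per-server draw while idle, watts
pue = 1.8                      # assumed facility overhead multiplier
price_per_kwh = 0.10           # assumed utility rate, $/kWh

idle_kw = servers * idle_fraction * idle_power_w * pue / 1000
annual_cost = idle_kw * 8760 * price_per_kwh   # 8,760 hours per year
print(f"${annual_cost:,.0f} per year")         # $35,478 per year
```

Even with these modest assumptions, identifying and powering down ghost servers is worth tens of thousands of dollars a year for a thousand-server fleet.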
Next Steps: More Control and More Savings
The same energy management solution that provides fine-grain visibility of real-time conditions should let you introduce and enforce power policies that maintain optimal operating conditions. With a superior solution, power thresholds can be set along with automated alerts and triggered responses that protect against equipment-damaging power spikes.
Maintaining a consistent temperature has a major impact on the reliability and lifespan of data center equipment. Armed with historical trending data from the data center, the IT and facilities teams can intelligently define and maintain an optimum operating temperature for their particular systems. Instead of over-cooling, temperature often can be raised because monitoring and alerts guard against hot spots.
Data center managers report that cooling systems account for almost 50 percent of the data center energy budget. Raising the data center temperature by as little as one degree can lower cooling energy costs by four percent. [2] A compelling business case for energy management can be built on this fact alone.
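A worked version of that rule of thumb, treating the 4 percent saving as compounding per degree; the bill size is an assumed figure:

```python
# Cooling savings from raising the set point. The 50% cooling share and
# 4%-per-degree figures are from the article; the bill size is assumed.
annual_energy_bill = 1_000_000             # assumed total energy spend, $
cooling_cost = annual_energy_bill * 0.50   # cooling ≈ half the budget
per_degree = 0.04                          # ~4% of cooling energy per degree
degrees_raised = 3

saving = cooling_cost * (1 - (1 - per_degree) ** degrees_raised)
print(f"${saving:,.0f}")                   # $57,632
```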
Besides adjusting power and temperature thresholds, energy management solutions can help IT maximize rack densities in the data center. With appropriate protection from power spikes and elimination of hot spots, each rack can be optimally loaded to maximize density while adhering to cooling requirements.
Facing the Future With Energy Facts in Hand
Eventually, every legacy system becomes impractical with operating costs that skyrocket after end of life support. By putting an energy management solution in place, the data center team will have insights that drive smart decisions regarding decommissioning systems during migrations and upgrades.
Implementing a collection point for energy management data is an excellent start to developing a long-term power management strategy. Combining real-time and historical trending information with business processes and best practices ensures that ongoing energy requirements remain minimized. It also paves the way for the longer-range decisions that must eventually be made. In this “sooner or later” scenario, the “sooner” option provides a longer savings window and a faster time to savings.
When data center managers ask us about best practices for saving energy in a data center with legacy equipment, we answer by showing them how their systems currently consume shared resources in the data center. This data typically provides the necessary clarity and insight that enables them to identify areas with the potential for the biggest returns.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Endnotes:
[1] Intel’s tests verify a reduction in server energy consumption by 40 percent since 2008. See: http://www.datacenterknowledge.com/archives/2012/06/12/server-efficiency-aligning-energy-use-with-workloads/
[2] From “DCM Overview 1212,” slide eleven: “Data center managers can save 4 percent in energy costs for every degree of upward change in the set point.” (Sun Microsystems) http://www.datacenterknowledge.com/archives/2008/10/14/google-raise-your-data-center-temperature

1:00p
In Bid For Enterprise, Rackspace Now Manages Single-Tenant VMware
Rackspace is stepping up its pursuit of big enterprise customers, filling a major need with a dedicated VMware vCenter offering. The company’s new single-tenant VMware offering extends enterprise on-premises VMware environments into Rackspace data centers, where the company will manage single-tenant VMware deployments. This moves the company upmarket, as well as providing enterprises with a solid entry point to hybrid cloud infrastructure.
“This new service has been designed to enable customers to migrate workloads out of their data center and into a Rackspace data center,” said Rackspace CTO John Engates. “This allows Rackspace to do what we do best, which is providing a fully managed hybrid cloud hosting service backed by Fanatical Support with maximum uptime.”
Rackspace has been a VMware shop for years. Long before it even offered public cloud, the company was doing VMware deployments in a shared fashion. It runs one of the largest VMware environments, in addition to operating the largest OpenStack-based cloud. “We’ve always been a multi-technology, technology agnostic company,” said Engates. “This speaks to a wider set of companies that have a wider variety of needs.”
A Foot in the Enterprise Door
This move is a logical next step in Rackspace’s hybrid cloud strategy. Single-tenant managed VMware gives it a foot in the door with many companies, allowing them to expand into other services and accelerating the journey to hybrid cloud.
“We’ve had aspirations to go upmarket toward the enterprise, and this is just one step in that direction,” said Engates. “Every once in a while we’d come across a larger company and they’d ask, ‘Can’t I use my own vCenter? My own tools?’ We’d have to say no. Every once in a while we would do a one-off deal, what we’d call a managed colo environment. What we’re trying to do here is address that segment of the market.”
Engates said it also differentiates Rackspace from its chief competition: pure public cloud providers like Amazon Web Services, which can’t offer high-end managed services in a dedicated physical setup in addition to cloud. “This fills a big hole in our offering,” said Engates. “Anyone that uses VMware can fit into this. It can be tailored and customized. They don’t have to fit squarely in the managed hosting packages. We offer Fanatical Support and management atop of it.”
Familiar Environments
The Rackspace hosted VMware environment will look and feel like an extension of the customer’s own data center by leveraging the same vCenter APIs used by their existing tools. Customers maintain control and management capabilities through the use of dedicated vCenter Servers, vCenter APIs, compatible third-party tools, and their existing service catalogs, orchestration platforms and portals. Customers can utilize their familiar orchestration tools to conveniently provision virtual machines (VMs) in minutes, while providing visibility into costs and usage whether on- or off-premise through a single user interface.
“Utilizing Rackspace’s hybrid cloud portfolio gives customers the choice to find the best fit for their applications and workloads, all while offloading data center management so that they can focus on their core business,” said Engates.

1:38p
IO Conducts PUE Faceoff: Modular vs. Raised Floor
A row of IO Anywhere data center modules at the IO Phoenix facility. (Photo: IO)
Which delivers the best energy efficiency: raised-floor space or a modular data center? It’s a hard question to answer in a satisfying fashion without comparing apples and oranges. What about a comparison of modules and raised-floor space within a single facility, using the same building envelope and chiller plant, with the data reviewed by a third party?
IO had exactly this situation in its IO Phoenix facility, which is split between an initial phase of raised-floor data center space and a second phase filled with the company’s IO.Anywhere modules. So IO asked the local utility, Arizona Public Service (APS), to review 12 months of data for both environments and calculate their respective operating costs and Power Usage Effectiveness (PUE).
The result? APS found the modular design offered significant improvements in efficiency and economics. The utility said the raised-floor area within IO Phoenix had a PUE of 1.73 for 2012, while the modular data center environment had a PUE of 1.41. That difference translates into an annual savings of $200,000 per megawatt of average IT power for customers using the IO.Anywhere modular build-out instead of the raised floor space at IO.
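Those figures can be sanity-checked from the definition of PUE, total facility power divided by IT power. The utility rate below is an assumption; the article does not state one:

```python
# PUE gap between raised floor (1.73) and modules (1.41), per MW of IT load.
it_load_mw = 1.0
pue_raised_floor = 1.73
pue_modular = 1.41
rate_per_kwh = 0.07            # assumed $/kWh; not given in the article

extra_kw = it_load_mw * (pue_raised_floor - pue_modular) * 1000
annual_savings = extra_kw * 8760 * rate_per_kwh
print(f"${annual_savings:,.0f}")   # $196,224, close to the quoted $200,000
```

At roughly seven cents per kilowatt-hour, a plausible commercial rate in Arizona, the 0.32 gap in PUE accounts for almost exactly the $200,000-per-megawatt savings APS reported.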
“Our calculations did show that the IO.Anywhere modular data center uses less energy than a traditional data center build-out, at least in the case of this IO data center,” said Wayne Dobberpuhl, APS Energy Efficiency Program Manager. “Moving forward, we are working with IO to establish the right baseline for assessing the appropriate rebate for this efficiency work under our Solutions for Business program.”
Customer Analysis Focuses on Operating Costs
IO has a clear perspective in this debate, as the company readily admits. “Our actions have long spoken to our faith in the modular design,” said Patrick Flynn, the Lead Sustainability Strategist for IO.
But Flynn said the data will be “exceptionally valuable” to enterprises pondering the best deployment option and evaluating modular units. He noted the importance of having Arizona Public Service provide an independent analysis of the potential energy savings.
Operating costs have become a focal point in comparisons between traditional hot aisle/cold aisle data centers and modular offerings like IO.Anywhere, which are built in a factory using repeatable designs and can be shipped to a customer premises or provider facility. A 2012 study from 451 Research found that modular data centers are usually cheaper to build than traditional raised-floor space, but noted that the economics of the two deployment models are complex, and didn’t stake out a position on their relative OpEx.
In analyses with many moving parts, proponents of each approach have found support in PUE, the energy efficiency metric popularized by The Green Grid that compares a facility’s total power usage to the amount of power used by the IT equipment, revealing how much is lost in distribution and conversion. The average PUE is about 1.8.

2:30p
With New $3 Billion Credit Line, Digital Realty Gets Cheaper Money, and Lots of It
The Digital Realty Trust data center in Chandler, Arizona.
Digital Realty Trust is already a force to be reckoned with in the data center industry. Now it has access to a lot of money – a $3 billion credit facility – on better terms than before. Digital Realty’s investment-grade status provides a lower cost basis for all its operations, including financing acquisitions and tenant improvements, and with its latest moves it clearly believes it’s the REIT time to kick into high gear.
The company refinanced its global revolving credit facility and term loan. All-in pricing was reduced by 20 basis points for its $2 billion revolving credit facility and by 25 basis points for a $1 billion term loan.
The refinancing allowed the company to reduce pricing, extend loan maturities and increase its aggregate commitments by $450 million. The combined $3 billion facility is the fifth largest unsecured credit facility among US real estate investment trusts (REITs). It’s a good business, so the money isn’t scared to back it.
“We are very pleased with the strong demand we received from the international lending community to participate in the refinancing of these facilities, which were oversubscribed with commitments totaling $4.6 billion from 27 financial institutions from around the globe,” said William Stein, Chief Financial Officer and Chief Investment Officer of Digital Realty. “To satisfy this demand, we upsized our Global Revolving Credit Facility by $200 million and increased our Term Loan by $250 million. In addition, the improved pricing grid is equal to or better than any widely syndicated credit facility for a U.S. large cap investment grade REIT, including those with a credit rating higher than DLR’s BBB/Baa2 rating. We believe these positive trends illustrate the institutional lender community’s view on the strength of our balance sheet and underlying business, while providing us with greater financial flexibility as we continue to expand our portfolio globally.”
Credit Terms as a Business Differentiator
Why do the details of a credit facility matter? When Digital Realty has access to cheaper money than its competitors, the company can leverage that advantage in a number of ways, gaining cost advantages on competitors when it is making acquisitions, financing construction and even competing for customer leases.
Digital Realty’s $2 billion credit facility matures in November 2017 and has two six-month extension options. It can be increased up to a total of approximately $2.55 billion U.S. dollar equivalent. How much better are the terms? Pricing for the facility, based on the company’s senior unsecured debt rating of BBB/Baa2, was reduced from 125 to 110 basis points over the applicable index for floating-rate advances, and the annual facility fee was reduced from 25 to 20 basis points.
The $1 billion multi-currency term loan still matures in April 2017, with two six-month extension options added. Total loan commitments can be increased up to $1.1 billion. Pricing for the term loan was reduced from 145 to 120 basis points, based on the company’s unsecured debt rating. In addition, the company was able to achieve improved covenant terms and definitions, including the removal of the tangible net worth covenant and reducing the cap rate from 8.25% to 8.00% on data center assets.
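For a sense of scale, here is what those basis-point reductions are worth in annual interest, assuming (unrealistically) that both facilities are fully drawn; one basis point is 0.01 percent:

```python
# Annual savings from the repricing: revolver 125->110 bps plus facility
# fee 25->20 bps on $2B, and term loan 145->120 bps on $1B. Fully-drawn
# balances are an illustrative assumption.
BP = 0.0001                                   # one basis point = 0.01%
revolver_saving = 2_000_000_000 * (125 - 110) * BP
facility_fee_saving = 2_000_000_000 * (25 - 20) * BP
term_loan_saving = 1_000_000_000 * (145 - 120) * BP

total = revolver_saving + facility_fee_saving + term_loan_saving
print(f"${total:,.0f} per year")              # $6,500,000 per year
```

Even at partial utilization, shaving 15 to 25 basis points off $3 billion of commitments is a multi-million-dollar annual advantage, which is why credit terms can function as a competitive differentiator.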
To support the global nature of Digital Realty’s operations, funds from the combined facilities may be drawn in several currencies: U.S., Canadian, Singapore, Australian and Hong Kong dollars, as well as euros, pounds sterling, Swiss francs, Mexican pesos and Japanese yen. The company’s ongoing global expansion means it probably will draw in a variety of currencies.
“We would like to acknowledge Merrill Lynch, Pierce, Fenner & Smith Incorporated, Citigroup Global Markets Inc. and J.P. Morgan Securities LLC’s efforts in their capacity as Joint Lead Arrangers and Joint Book Running Managers which led to a successful syndication of the two facilities and extend our gratitude to the entire bank group for their overwhelming support of the Company,” added Stein.

2:54p
OnApp, SolidFire Team to Deliver Cloudy SSD Storage
A “five stack” unit of SolidFire’s all-SSD storage units.
OnApp and SolidFire have announced an integration allowing OnApp customers to launch high-performance clouds using SolidFire’s all-solid-state-drive (SSD) storage. OnApp cloud providers can now easily offer SolidFire storage for more disk-intensive applications, with quality of service controls and guaranteed IOPS performance.
“The OnApp platform has always offered service providers a very broad range of storage choices, and SolidFire is an important new option for customers who need consistent high disk performance,” said Kosten Metreweli, OnApp’s Chief Commercial Officer. “OnApp customers like Crucial Cloud Hosting have already shown that there is real appetite for guaranteed IOPS in the market, and we’re looking forward to seeing what other cloud providers can do with SolidFire, too.”
The integration uses the SolidFire API to manage SolidFire clusters through the OnApp control panel. SolidFire storage clusters are managed in just the same way as other forms of storage in OnApp Cloud. The first cloud provider out of the gate with the joint solution is Australia’s Crucial Cloud Hosting. A video on that solution is here.
With SolidFire and OnApp, cloud providers can provision storage performance and capacity to customers independently, and guarantee disk performance for individual workloads by enabling them to choose IOPS (Input/output Operations Per Second) for their virtual machines.
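Guaranteeing a minimum IOPS per volume implies admission control on the cluster: a new guarantee can only be accepted if the sum of all guaranteed minimums still fits within total capacity. A minimal sketch of that idea (illustrative only, not the actual SolidFire or OnApp API):

```python
# Toy IOPS admission control: admit a new volume's guaranteed minimum only
# if existing commitments plus the new one fit within cluster capacity.
def can_admit(volumes, new_min_iops, cluster_capacity_iops):
    committed = sum(v["min_iops"] for v in volumes)
    return committed + new_min_iops <= cluster_capacity_iops

volumes = [{"min_iops": 2000}, {"min_iops": 3000}]
print(can_admit(volumes, 4000, 10000))   # True: 9,000 of 10,000 committed
print(can_admit(volumes, 6000, 10000))   # False: would overcommit to 11,000
```

Keeping the sum of minimums below cluster capacity is what makes "guaranteed" meaningful: capacity above the committed floor can still be shared opportunistically for burst performance.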
“SolidFire storage is designed for performance and scale in multi-tenant clouds, so it’s a perfect fit with OnApp and its focus on the cloud service provider market,” said Jay Prassl, VP of Marketing, SolidFire. “One of the great things about using SolidFire storage with OnApp is how easy it is for a service provider to take our fine-grained quality of service (QoS) control and present it directly to customers – it’s right there in the OnApp control panel. Now customers running disk-intensive apps and databases on dedicated hardware can get the same or better performance in an OnApp cloud – guaranteed.”
SolidFire’s all-solid-state architecture offers up to 7.5 million IOPS and 3.4 petabytes in a single cluster. This expands the appeal of an OnApp cloud to customers that would normally rely on dedicated servers, due to performance sensitivities that other cloud storage doesn’t address.