Data Center Knowledge | News and analysis for the data center industry
Thursday, January 10th, 2013
| 11:36a |
Mirantis Gets $10M to Ramp Up its OpenStack Play

OpenStack specialist Mirantis has received $10 million in funding from Dell Ventures, Intel Capital and WestSummit Capital to accelerate its growth.
Mirantis, a large OpenStack systems integrator, today received $10 million in growth capital financing from Dell Ventures, Intel Capital and WestSummit Capital to accelerate its growth in the OpenStack market. OpenStack is an open source cloud operating system that originated as a joint NASA and Rackspace project and is now managed by the OpenStack Foundation.
Mirantis, which is based in Mountain View, Calif., was previously financed through sales revenues. “We believe that OpenStack is on its way to becoming a universal control plane for the entire application infrastructure fabric,” said Adrian Ionel, president and CEO of Mirantis. “This phenomenon is transforming the industry. To help accelerate this transformation in a meaningful way, we need strong strategic investors like Dell Ventures, Intel Capital, and WestSummit Capital. We consciously picked investors who aligned with us on vision and our commitment to accelerate OpenStack adoption around the world.”
Dell and Intel are active participants in the OpenStack community. WestSummit Capital is a China-based global growth stage technology investment firm that invests in industry-leading technology companies with a substantial presence or strategic interest in China.
The privately-held Mirantis is working to speed adoption of OpenStack clouds by service providers, SaaS vendors and enterprises. Boris Renski, Mirantis co-founder and executive vice president, was elected to the board of the OpenStack Foundation, which provides leadership to the community and guidance over development efforts.
Production Deployments
Mirantis continues to amass OpenStack experience, helping its customers through more than 30 OpenStack deployment projects in the past 18 months. The company has helped many early adopters – organizations such as NASA, WebEx, Gap, PayPal, Internap, and AT&T – successfully launch production-grade clouds.
“Mirantis is a true example of the power of OpenStack technology,” said Lisa Lambert, vice president of Intel Capital and managing director of the Software and Services sector. “In just a short amount of time, Mirantis has established itself as a key OpenStack systems integrator by helping numerous early adopters realize the value of the open source cloud movement. Intel Capital looks forward to working with its team as it continues to expand its global footprint.”
That global footprint includes the Asia-Pacific market, an opportunity addressed in today’s investment. “We see tremendous demand for OpenStack in the APAC market and China in particular,” said Elise Huang, partner at WestSummit Capital. “Beijing generates more traffic to the OpenStack.org website than any other city in the world today, and China hosts some of the largest cloud deployments in the world. With our investment, we aim to help Mirantis establish a stronghold in the APAC market.”
“OpenStack is a central component of Dell’s cloud strategy and this investment reflects our company’s commitment to open source platforms that give our customers more options and flexibility,” said Nnamdi Orakwue, vice president of Dell Cloud. “We recently announced OpenStack as our open source cloud platform of choice for public and private clouds. The partnership with Mirantis demonstrates our commitment to the community and our goal of becoming a leading contributor to OpenStack. Dell continues to make strategic investments that enhance our portfolio and customer value.”
Mirantis has dedicated developers contributing to OpenStack Quantum LBaaS and helps run the Bay Area OpenStack user group in Silicon Valley. The company also offers a training program to expand the pool of skilled engineering talent and expertise in OpenStack.

| 12:30p |
Equinix To Build Fourth Tokyo Data Center

This Equinix data center is one of three existing sites in Tokyo. The company has announced plans for a fourth Tokyo data center. (Source: Equinix)
Equinix (EQIX) announced a $43 million investment to build its fourth International Business Exchange data center in the Ōtemachi district of Tokyo. Equinix data centers in Tokyo currently offer more than 194,000 square feet of space and the fourth facility will help meet strong demand, as well as give customers a broad choice of network connectivity options.
Equinix’s TY4 will be located in the heart of Tokyo’s business district, an international financial center and Japan’s Internet exchange hub. With its close proximity to major financial exchanges, Equinix can appeal to large financial firms, Internet service providers and content providers.
“Tokyo is a strategic market for Equinix as it is one of the largest data center markets worldwide and a primary gateway for traffic from the U.S. to Asia,” said Kei Furuta, managing director, Equinix Japan. “We believe our expanded presence in Japan will give our customers greater access to global network connectivity and capitalize on the opportunities presented by the Japanese market. We would like to thank Mitsubishi Estate for its support in securing ‘Otemachi Financial City’ for our TY4 data center.”
Equinix will launch the new IBX data center in two phases, providing total capacity of 750 cabinet equivalents. The first phase is set to open in the third quarter of 2013 with 450 cabinet equivalents. TY4 will have direct fiber connectivity to Equinix’s three other IBX data centers in the city, which will further facilitate customer interconnections.
International Data Corporation (IDC) projects that the data center outsourcing market in Japan will grow to 1.4 trillion yen (US$20 billion) in 2015, with the colocation market reaching 742.6 billion yen (US$9.26 billion) in 2015.

| 1:00p |
Cloud Builders Still Leasing Data Center Space

Apple is building huge new data centers in three states, including the North Carolina iDataCenter pictured above. Meanwhile, it’s leasing large quantities of data center space in Silicon Valley. (Photo: Apple)
Many of the largest cloud computing providers opted to lease new Internet infrastructure in 2012, according to new data from a veteran market watcher. The report highlights the shifting tides in the “buy or build” decision, in which geography and market economics are contributing to a two-tier infrastructure for many of the largest Internet players, with footprints split between company-built data centers and wholesale space.
Apple, Facebook and Microsoft were among the largest consumers of turn-key “wholesale” data center space in 2012, according to Jim Kerrigan, Director of the Data Center Group at Avison Young. Microsoft leased 12 megawatts of new wholesale space in 2012, with Facebook (10 megawatts) and Apple (8 megawatts) not far behind.
The trend is notable because all three companies have recently been building their own massive data center facilities. Facebook has 1.5 million square feet of data center space that is either built or nearing completion, while Apple has finished its huge iDataCenter in North Carolina and is building new facilities in Oregon and Nevada. Microsoft has built its own server farms at seven sites across the U.S. and Europe over the past five years.
Economics, Scale and Deployment Speed Drive Decisions
The build vs. buy question is a key decision for web-scale companies with massive cloud computing operations. Company-built facilities offer economies of scale and can be customized with efficient designs that cut power bills. In the wholesale data center model, a tenant leases a dedicated, fully built data center space. This approach offers faster deployment of new capacity and the ability to manage capital spending in regions where it is expensive to operate data centers.
Why the shift from building to buying in 2012? There are several reasons:
Geography: After years of building huge data centers in remote areas, in 2012 the geographic focus shifted back to historic Internet hubs in northern Virginia, Silicon Valley and Chicago. Apple and Facebook have moved armadas of servers to rural locations in North Carolina and Oregon that offer cheap power and cheap land. Cloud builders will continue to do this going forward, but a portion of their infrastructure must always be housed near the Internet’s key intersections, where they can connect with dozens of other networks. Both land and power are more expensive in these Internet hubs, resulting in different economics for large-scale new construction. That’s why the largest wholesale data center providers have a large presence in these markets.
Market Dynamics: The data center building boom in recent years has brought more supply online in key markets like northern Virginia and Santa Clara, where multiple wholesale providers wind up competing for large deals. This additional capacity favors large “super wholesale” customers that can lease vast chunks of server space. These companies – which include Facebook, Microsoft, Apple and Rackspace – can use their scale as leverage in pricing, and sometimes get discounts by working deals for space in multiple markets. Analysts have expressed concern that this may erode returns for developers of wholesale space, but these mega-deals also have major benefits for providers, as they lock down major tenants.
Advances in Wholesale Data Center Products: Design issues are also a factor in the buy vs. build equation. Facebook’s company-built data centers in Prineville and North Carolina featured design customizations that slashed the company’s power bill to run its servers. Last month Facebook said it had deployed its Open Compute designs in leased wholesale data center space. Two of Facebook’s wholesale providers – Digital Realty Trust and DuPont Fabros Technology – have worked to adapt these designs in their newer data centers. That effort extends beyond Facebook, as seen in Digital Realty’s Turn-Key Flex program, which offers additional customization options on “plug-n-play” space.
Is the shift to wholesale leases a long-term trend? Apple and Facebook are continuing to build their new facilities, and appear poised to continue that effort. Rackspace is committed to a wholesale leasing model, while Amazon tends to build out its own sites in retrofitted properties (including some leased from Digital Realty). The wild card is Microsoft, which built its own facilities from 2007 to 2011, but leased wholesale space in three major markets in 2012.

| 1:30p |
Turning DCIM’s Big Data into Actionable Insight

Gary Bunyan is Global DCIM Solutions Specialist at iTRACS Corporation, a Data Center Infrastructure Management (DCIM) company. This is the ninth in a series of columns by Gary about “the user experience.” See Gary’s most recent columns: Unlock Your Capacity By Unplugging Your Ghost Servers and Boost Rack Densities Without Racking Your Brain.
Recently, I met with a new customer in Europe – an industry-leading online company with more than two dozen data centers serving hundreds of thousands of consumers worldwide. This company is using Data Center Infrastructure Management (DCIM) to manage moves, adds, and changes in multiple facilities, extend the life of its existing physical infrastructure, and maintain the highest levels of availability for a very demanding customer base. As you can imagine, a lot of our conversation focused on how to optimize capacity utilization and business continuity.
But the real topic of discussion? The Data!
By this, I mean the massive amounts of data “locked” inside the customer’s data centers, buried across their infrastructure – what can be called “DCIM’s Big Data.”
DCIM’s Big Data is way bigger than just individual data points about power usage or asset leases. It’s about power, space, cooling, CPU utilization, business output, work-per-watt-per-business unit, facilities loads, network connectivity and much more – all of the interrelationships and inter-dependencies between assets, systems and applications that ultimately drive the overall performance of the data center.
DCIM’s Big Data is data about any asset, application, process, or workflow that impacts the performance of the physical ecosystem and its ability to serve the ever-changing needs of the business.
Conventional thinking says DCIM is only about either power or asset management, isolated from each other. However, that is a dangerously limited view of the problems DCIM needs to solve. Instead, DCIM acts as an open information highway that lets you access, aggregate, and interconnect data coming from anywhere and then turn it into relevant insight. This is DCIM’s true role as a game-changer.
From this perspective, DCIM’s job is to make the data:
- Understandable – collected and made visible in a way that makes it meaningful
- Relevant – enriched with the context of space, connectivity, and time to provide a holistic understanding of the entire interconnected physical ecosystem
- Interconnected – presented in relationship rather than in silos, creating rich new layers of meaning
- Actionable – presented within a single holistic management framework so it can be instantly acted upon to drive transformational change
DCIM is the only viable way of “unlocking” the data sitting in the physical infrastructure. That is why my European client was so interested in the bi-directional information exchange made possible by a data-sharing technology called the DCIM Open Exchange Framework.
Bi-directional Exchange of Information
The DCIM Open Exchange Framework is a core DCIM technology that connects all external data sources, systems, and workflows to the DCIM environment, allowing for the free bi-directional exchange of information. Using the framework, DCIM can send or receive any data point from any vendor or system using open, industry-standard interfaces and protocols.
The framework creates access to unlock and leverage DCIM’s Big Data:
- Streams of data from virtually any source are aggregated, analyzed, and presented as actionable information within a single point of management
- The customer’s business systems and workflows are enriched by pulling information from DCIM back into their enterprise environments, such as ticketing, supply chain, etc.
- Processes and workflows are enhanced both within and beyond the DCIM ecosystem – a true bi-lateral exchange of business value.
Maintenance schedules and ticketing activities can be shared with outside systems.
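As a rough illustration of what that kind of bi-directional exchange can look like in practice, the sketch below pushes an external power reading into a DCIM system and pulls a maintenance window back out to raise a ticket. The endpoints, field names, and payload shapes here are hypothetical placeholders, not part of any published iTRACS or DCIM Open Exchange Framework API.

```python
# Hypothetical sketch of a bi-directional DCIM data exchange over HTTP/JSON.
# None of these endpoints or field names come from a real product API.
import requests

DCIM_API = "https://dcim.example.com/api"        # hypothetical DCIM endpoint
TICKETING_API = "https://ticketing.example.com"  # hypothetical ticketing system

def push_power_reading(asset_id: str, watts: float) -> None:
    """Inbound: send a power reading from an external meter into the DCIM system."""
    requests.post(f"{DCIM_API}/assets/{asset_id}/power",
                  json={"watts": watts}, timeout=10).raise_for_status()

def open_maintenance_ticket(asset_id: str) -> str:
    """Outbound: pull the asset's maintenance window from DCIM and raise a ticket
    in an external system (for example, something like BMC Remedy)."""
    window = requests.get(f"{DCIM_API}/assets/{asset_id}/maintenance",
                          timeout=10).json()
    resp = requests.post(f"{TICKETING_API}/tickets",
                         json={"asset": asset_id,
                               "summary": "Scheduled PDU maintenance",
                               "start": window["start"],
                               "end": window["end"]},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()["ticket_id"]

if __name__ == "__main__":
    push_power_reading("pdu-017", 4215.0)
    print(open_maintenance_ticket("pdu-017"))
```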
Power, Maintenance and Financial Data
Here are some examples of the insights this makes available:
Let’s say you want to manage all of your enclosures, PDUs, and other assets across all of your data center sites. Using the framework approach to share data, you can stream various data sources into the DCIM system, combine them in one view, and manage them from a single screen. Here’s an example of three different types of data combined into a single interconnected view (a rough sketch of this kind of merge follows the list):
- Power Log – you can stream data that shows you real-time power draws of each asset within the context of the entire power chain. Where are the biggest power draws and why? Where is “stranded power” being wasted? What are the impacts upstream and downstream, and where are the best energy efficiency opportunities?
- Maintenance Log – you can track maintenance on each PDU so you know what is scheduled when, why, and potential impacts on other PDUs. You can then feed this data into outside ticketing systems like BMC Remedy so everyone knows what is going on and all maintenance activities are managed with precision.
- Financial Log – you can manage each enclosure within the context of the business units and/or colo customers being supported. You can track power costs, space consumption, contract terms, and other financial information to make sure the assets are being managed for bottom-line impact to the business. This is a great tool for colos and service providers.
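As promised above, here is a minimal sketch of what merging those three feeds into one per-asset view can look like. The feed contents, field names, and sample values are invented for illustration; they are not drawn from iTRACS or any particular DCIM product.

```python
# Minimal sketch: join three hypothetical data feeds (power, maintenance,
# financial) into a single per-asset view keyed on asset ID.
from collections import defaultdict

power_log = [
    {"asset": "pdu-A1", "kw_draw": 42.5},
    {"asset": "pdu-B2", "kw_draw": 17.0},
]
maintenance_log = [
    {"asset": "pdu-A1", "next_service": "2013-02-01", "ticket": "INC-1042"},
]
financial_log = [
    {"asset": "pdu-A1", "business_unit": "e-commerce", "monthly_cost_usd": 3100},
    {"asset": "pdu-B2", "business_unit": "analytics", "monthly_cost_usd": 1250},
]

def combine(*feeds):
    """Merge records from each feed into one dictionary per asset."""
    view = defaultdict(dict)
    for feed in feeds:
        for record in feed:
            view[record["asset"]].update(record)
    return dict(view)

if __name__ == "__main__":
    for asset, info in combine(power_log, maintenance_log, financial_log).items():
        print(asset, info)
```

In a real DCIM deployment this correlation happens inside the platform itself, but the principle is the same: records from separate silos become one interconnected view once they share a common key.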
Financial information can be segmented by user, business unit, or any other category. Images courtesy of iTRACS.
How the Data is Combined
DCIM’s greatest value lies in presenting information in relevant combinations rather than in isolation.
For example, let’s say you’re doing scheduled maintenance on a PDU and suddenly a power outage occurs, impacting your other (backup) PDUs. A pool of revenue-generating servers is now at risk of going down, directly impacting your company’s revenue stream. Because you have access to interconnected information about (1) power, (2) assets, (3) redundancy, and (4) maintenance, you can swiftly take coordinated action to avoid calamity. This would be impossible if your data were disconnected and siloed, neither readily available nor offered in a meaningful context.
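To make that scenario concrete, here is a toy check, using invented asset names and a deliberately simplified A/B redundancy model, that flags servers left without a healthy power feed when one PDU is in maintenance and its partner fails.

```python
# Toy example: flag servers at risk when one PDU is in maintenance and its
# redundant partner PDU has failed. Asset names and the A/B feed model are
# illustrative only.
pdu_state = {"pdu-A": "maintenance", "pdu-B": "failed"}   # normally "ok"
servers = {"web-01": ("pdu-A", "pdu-B"), "db-01": ("pdu-A", "pdu-B")}

def at_risk_servers():
    """Return servers with no healthy power feed left."""
    return [name for name, feeds in servers.items()
            if all(pdu_state.get(pdu) != "ok" for pdu in feeds)]

if __name__ == "__main__":
    for server in at_risk_servers():
        print(f"ALERT: {server} has no healthy power feed "
              f"-- coordinate maintenance and repair before proceeding")
```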
The bottom line? Infrastructure management is all about interconnected information. Either you have it or you need it. If you don’t unlock DCIM’s Big Data, it may end up locking you into very narrow management windows with very limited control over assets.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

| 2:15p |
Green Grid: Eco Mode Offers UPS Savings, With Caveats

This chart from The Green Grid outlines the potential energy efficiency gains from using an “eco mode” UPS configuration. (Source: The Green Grid)
The energy-saving UPS configuration known as “eco mode” has become a viable option to help increase energy efficiency and cut costs in the data center, according to The Green Grid, which offers some caveats and tips on implementation.
The Green Grid, a global consortium focused on improving energy efficiency in data centers, has released a white paper that takes a deep dive into eco mode in uninterruptible power supply (UPS) systems. Eco mode boosts UPS efficiency and improves PUE when properly designed and deployed, and it is identified as one of the energy savings recommendations in The Green Grid Data Center Maturity Model (DCMM). The white paper looks at deployment considerations and makes recommendations, discussing energy efficiency gains and economic benefits.
The need for energy efficiency is growing, and there are many options to incrementally improve power system efficiency. UPSes can have several modes of operation, including eco mode, which is typically the most efficient. Not all eco mode designs are the same, and the differences can matter, according to The Green Grid.
Switch-Over Timing is Crucial
There are several points that data center operators need to consider prior to implementation. Operators should analyze the power distribution within their data centers. It is also extremely important to match the switch-over time of an eco mode UPS to the power supply ride-through or static switch time. Utility power also needs to meet a certain quality level for eco mode to be considered.
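As a simple illustration of that matching exercise, the sketch below compares an assumed eco mode transfer time against the ride-through (hold-up) times of a few hypothetical loads. The millisecond values are placeholders, not figures from the white paper.

```python
# Illustrative check: an eco mode UPS transfer must complete before the
# shortest server power-supply ride-through (hold-up) time expires.
# The millisecond values below are placeholders, not report figures.
ups_transfer_ms = 4.0
psu_ride_through_ms = {"server-rack-1": 12.0, "server-rack-2": 8.0, "legacy-nas": 3.0}

for load, hold_up in psu_ride_through_ms.items():
    margin = hold_up - ups_transfer_ms
    status = "OK" if margin > 0 else "AT RISK"
    print(f"{load}: ride-through {hold_up} ms, margin {margin:+.1f} ms -> {status}")
```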
There are also a variety of things that need to happen prior to wider implementation. Before many data centers will be able to move from a Level 1 UPS eco mode implementation to a Level 3 implementation in The Green Grid’s Data Center Maturity Model, the report says, the following will need to take place:
- Voltage immunity curves should be updated to help all designers and operators by accounting for the shorter ride-through times now implemented in many servers.
- Users should identify their critical business types and the characteristics of their loads to determine the best implementation mode for their business.
Prior to implementing eco mode, everything needs to be tuned properly, and the white paper spells out which factors need consideration. How does it affect transfer times? Fault tolerance performance?
The savings and efficiencies from eco mode are highlighted in a variety of metrics. The Green Grid examined the impact on power usage effectiveness (PUE) of UPS eco mode efficiency relative to UPS double conversion efficiency with three types of data centers split by efficiency as measured by PUE:
- Best in class PUE = 1.3
- Current PUE = 1.6
- Legacy PUE = 2.0
Energy Efficiency Gains Outlined
The report includes figures and graphs showing the overall efficiencies in all three data center examples when using eco mode. PUE improves, and the economic benefit from energy savings can also be estimated. Energy savings ranged from less than $100,000 per year for a 1 megawatt (MW) IT load to almost $500,000 per year for a 5 MW load in a legacy data center, based on the operating assumptions listed in the report.
UPS inefficiencies generate heat in the data center, which is removed by the data center’s cooling system. That heat removal is accounted for with the PUE cooling contribution. Here’s an excerpt:
There is a close conceptual relationship between UPS Eco Mode operation and the well-known “free cooling” operation that allows data centers to reduce energy consumption in their cooling systems when the outside ambient air temperature and humidity are within acceptable levels. As with external weather conditions, data center operators can make the most of available high-quality utility power to improve efficiency. In this case, rather than free cooling, it is “free power quality” that data centers can take advantage of, available when the UPS Eco Mode uses external utility power that is already in high-quality condition. Similar to free cooling, where the chiller is not used or is turned off when not needed, the UPS Eco Mode turns off the UPS rectifiers/inverters when the availability of “free power quality” makes them unnecessary.
The bottom line: Eco mode boosts UPS efficiency from the usual high end of about 92% to 96% for the three data center PUE examples to as high as 98% to 99%. As part of its ENERGY STAR program, the EPA provides a 25% efficiency allowance to the manufacturer for the UPS eco mode capability.
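As a back-of-the-envelope illustration of how an efficiency gain of that size translates into dollars, the sketch below assumes a 1 MW IT load, 94% double-conversion efficiency versus 99% in eco mode, and a $0.10/kWh utility rate; these are illustrative assumptions, not the operating assumptions used in the report.

```python
# Rough annual-savings estimate from switching a UPS to eco mode.
# All inputs are illustrative assumptions, not the Green Grid report's figures.
it_load_kw = 1000.0           # 1 MW of IT load
eff_double_conversion = 0.94  # assumed double-conversion efficiency
eff_eco_mode = 0.99           # assumed eco mode efficiency
price_per_kwh = 0.10          # assumed utility rate, USD
hours_per_year = 8760

def ups_losses_kw(efficiency):
    """Power dissipated in the UPS for a given efficiency."""
    return it_load_kw / efficiency - it_load_kw

saved_kw = ups_losses_kw(eff_double_conversion) - ups_losses_kw(eff_eco_mode)
annual_usd = saved_kw * hours_per_year * price_per_kwh
print(f"UPS losses avoided: {saved_kw:.1f} kW")
print(f"Annual energy cost savings: ${annual_usd:,.0f} (before cooling savings)")
```

At those assumed values the avoided UPS losses come to roughly 54 kW, on the order of $47,000 per year before counting the cooling energy no longer needed to remove that heat, which is consistent with the report’s figure of less than $100,000 per year for a 1 MW load.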
Get the white paper here.

| 3:00p |
IceWEB Launches SSD Platform

Unified data storage appliance company IceWEB (IWEB) last week introduced a new line of all-solid-state unified storage products called the Ice Platform.
The new IceWEB SSD models replace traditional spindle-type hard drives with solid state drives across the IceWEB line. The new solid state appliances, called IceWEB ERX (environmentally responsible acceleration), provide improved I/O performance while minimizing heat generation and power consumption.
“Our new ERX platforms deliver all the IceWEB unified storage award-winning reliability, stability and an enterprise feature-set – key operational metrics for unified storage purchasers – while dramatically enhancing performance through our unique implementations of solid state technology,” said Gaurang Mehta, CTO of IceWEB. “They are faster, consume less power, generate less heat, and offer IO performance levels exceeding 100K IOPS and 1.2 GB/s. For clients, the Total Cost of Ownership characteristics are greatly enhanced while the environmental impact is significantly minimized.”
In the company’s fiscal year-end 10-K filing last year, CEO Rob Howe expressed enthusiasm and optimism about 2013 bringing new alliances, markets, products and sales segments that will “bear considerable fruit for the business.”
IceWEB recently added a third-party endorsement from 451 Research, which issued a report on IceWEB’s new Cloud Services Unit. IceWEB’s private cloud strategy addresses the on-premises private cloud, the hosted private cloud and the on-premises cloud connecting to third-party public clouds. The report indicates that IceWEB will provide an end-to-end integrated cloud storage offering that comprises all of the necessary hardware and software for backend and frontend management.

| 3:30p |
How to Build a Zero Downtime Data Center Network

The modern data center has evolved far beyond the standard one-to-one server environment. Today, we are seeing virtualization, cloud computing and many more users accessing data center resources. This trend, coupled with IT consumerization, forces many IT administrators to find ways to manage such a diverse platform more efficiently.
As more users and workloads begin to access a data center, engineers are deploying more agile environments capable of growth. These high-density platforms are designed to support large numbers of users and numerous different workloads, and to create a more robust data center. This is where a new challenge arises: although these platforms are designed to handle more users and higher levels of virtualization, what happens when these systems become the primary access point for most (if not all) users?
This is where Juniper’s Virtual Chassis Technology can really help. In the How to Build a Zero Downtime Data Center Network with Virtual Chassis Technology webinar, Jim Witherell of Hillenbrand Inc. and Harsh Singh of Juniper Networks discuss how this type of agile chassis can create a data center network capable of zero downtime. The webinar covers numerous topics, including:
• Creation of VLANs between different Virtual Chassis
• Non-stop service upgrade features
• Utilizing JunOS as a method to sync data
• Using standard-based protocols to create a more resilient Virtual Chassis
• Integration with other components within a data center
Advanced switching methodologies and increased chassis performance have created an environment capable of handling certain types of outages – thus preventing downtime. Engineers are continuously looking for ways to make their data centers more resilient to outages. In the How to Build a Zero Downtime Data Center Network with Virtual Chassis Technology webinar, we see how Juniper not only approaches this challenge but helps put it to rest. Click here to watch this webinar now.

| 4:00p |
Data Center Jobs: Colo4

At the Data Center Jobs Board, we have a new job listing from Colo4, which is seeking an Account Executive in Dallas, Texas.
The Account Executive acts as a consultant who helps structure data center and managed service solutions for prospects and customers, including colocation, backup, managed services and cloud products. The role involves meeting or exceeding assigned sales objectives and monthly revenue quotas by building and maintaining a customer base; gathering information on prospect and customer business processes, challenges, critical success factors and competitive standing; reviewing complex customer requirements, equipment configurations, feasibility of intended applications, required software and the adequacy of implementation plans for customer needs; and providing specific solution recommendations. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.