Data Center Knowledge | News and analysis for the data center industry
Tuesday, January 28th, 2014
Microsoft Joins Open Compute Project, Shares its Server Designs
These are some of the more than 1 million servers powering Microsoft’s Internet infrastructure. The company is joining the Open Compute Project and sharing the designs of its servers and storage. (Photo: Microsoft)
SAN JOSE, Calif. – In a dramatic move that illustrates how cloud computing has altered the data center landscape, Microsoft is opening up the server and rack designs that power its vast online platforms and sharing them with the world.
Microsoft has joined the Open Compute Project and will be contributing specs and designs for the cloud servers that power Bing, Windows Azure and Office 365. The company will discuss its plans tomorrow in the keynote session of the Open Compute Summit in San Jose.
Why would Microsoft, long seen as the standard-bearer for proprietary technology, suddenly make such an aggressive move into open hardware?
“We came to the conclusion that by sharing these hardware innovations, it will help us accelerate the growth of cloud computing,” said Kushagra Vaid, General Manager of Cloud Server Engineering for Microsoft. “This will directly factor into products for enterprise and private clouds. It’s a virtuous cycle in which we create a consistent experience across all three clouds.”
Azure Clouds for the Enterprise
The designs and code for Microsoft’s cloud servers will now be available for other companies to use. A larger circle of vendors will be able to build hardware based upon the designs, which in turn will allow enterprises to create hybrid Windows Azure clouds running on the same hardware in their on-premises data centers and in Microsoft’s cloud.
The Open Compute Project (OCP) was founded by Facebook in 2011 to take the concepts behind open source software and create an “open hardware” movement to build commodity systems for hyperscale data centers. It has spurred the growth of a vibrant development community, which is now expanding its focus to cover network equipment.
Microsoft now wants to reap the benefits of that ecosystem, which has rapidly transformed Facebook’s initial server and storage designs into commercial products. It also hopes to expand OCP’s efforts to include management software.
“The depth of information Microsoft is sharing with OCP is unprecedented,” said Bill Laing, Microsoft Corporate VP for Server and Cloud, in a blog post. “As part of this effort, Microsoft Open Technologies is open sourcing the software code we created for the management of hardware operations, such as server diagnostics, power supply and fan control. We would like to help build an open source software community within OCP as well.”
Competition in the Cloud
Microsoft’s move to align with Open Compute reflects the intensifying competition in cloud services, where Microsoft, Google and Rackspace are among the players seeking to wrest share from market leader Amazon Web Services. Tapping the OCP’s nimble ecosystem of hardware vendors could accelerate innovation on Microsoft’s cloud platform, resulting in an integrated hybrid cloud platform that can keep pace with AWS.
Closer Look: Microsoft’s Cloud Server Hardware 
SAN JOSE, Calif. - Google and Facebook may grab most of the data center headlines. But there’s been plenty of innovation going on in Redmond. As it joins the Open Compute Project, Microsoft can now show the world the custom server and storage designs that power its global armada of more than 1 million servers.
This isn’t the first glimpse of Microsoft’s infrastructure. Data Center Knowledge has brought its readers tours of Microsoft data centers in Chicago, Dublin and Quincy, Washington, as well as the first look at its cloud server designs back in 2011.
But with its commitment to OCP, Microsoft will be contributing hardware specifications, design collateral (CAD and Gerber files), and system management source code for its cloud server designs. These specifications apply to the server fleet being deployed for Microsoft’s largest global cloud services, including Windows Azure, Bing, and Office 365.
Microsoft’s cloud server architecture is based on a modular, high-density chassis approach that enables efficient sharing of resources across multiple server nodes. A single 12U chassis can accommodate up to 24 server blades (either compute or storage), with two blades populated side-by-side in each 1U slot. Each compute blade features dual 10-core Intel Xeon E5-2400 processors.
Up to 96 Servers Per Rack
A rack can hold three or four chassis depending on the rack height, which can be as tall as 52U (compared to an industry standard of 42U). That allows Microsoft to pack as many as 96 servers into a single rack.
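As a quick sanity check on those figures, the short sketch below works out the density math from the chassis and blade counts above; the rack heights are the two cited in the article, and the calculation ignores any rack units reserved for networking or power distribution.

    # Back-of-the-envelope rack density using the figures cited above
    # (12U chassis, 24 half-width blades per chassis). Illustrative only.
    CHASSIS_HEIGHT_U = 12
    BLADES_PER_CHASSIS = 24

    for rack_height_u in (42, 52):  # industry-standard rack vs. Microsoft's taller rack
        chassis_per_rack = rack_height_u // CHASSIS_HEIGHT_U
        servers = chassis_per_rack * BLADES_PER_CHASSIS
        print(f"{rack_height_u}U rack: {chassis_per_rack} chassis, up to {servers} servers")
    # Output: 42U rack: 3 chassis, up to 72 servers
    #         52U rack: 4 chassis, up to 96 servers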
Microsoft’s use of a blade chassis and half-width server and storage blades offers a different design approach than the current OCP offerings. Like Microsoft’s chassis, the OCP Open Rack provides centralized power supplies and fans, but it has a 21-inch wide equipment area, as opposed to the standard 19-inch wide trays seen in most racks. Facebook and OCP use a three-wide server design, with processors and memory housed inside a 2U sled. Open Compute storage houses JBODs (Just a Bunch of Disks) in 1U sleds, while Microsoft packs its storage into its half-width blades.
The addition of Microsoft’s designs gives OCP hardware vendors the opportunity to build servers for Windows Azure clouds running in enterprise data centers, as well as to compete for contracts with Microsoft.
Here’s a closer look at additional details Microsoft has supplied about the servers and storage it is contributing to OCP.
Chassis-based shared design for cost and power efficiency
- Rack mountable 12U Chassis leverages existing industry standards
- Modular design for simplified solution assembly: mountable sidewalls, 1U trays, high efficiency commodity power supplies, large fans for efficient air movement, management card
- Up to 24 commodity servers per chassis (two servers side-by-side), option for JBOD storage expansion
- Optimized for mass contract manufacturing
- Estimated to save 10,000 tons of metal per one million servers manufactured
Blind-mated signal connectivity for servers
- Decoupled architecture for server node and chassis enabling simplified installation and repair
- Cable-free design, results in significantly fewer operator errors during servicing
- Up to 50% improvement in deployment and service times
Network and storage cabling via backplane architecture
- Passive PCB backplane for simplicity and signal integrity risk reduction
- Architectural flexibility for multiple network types such as 10GbE/40GbE, copper/optical
- One-time cable install during chassis assembly at factory
- No cable touch required during production operations and on-site support
- Expected to save 1,100 miles of cable for a deployment of one million servers
Secure and scalable systems management
- x86 SoC-based management card per chassis
- Multiple layers of security for hardware operations: TPM secure boot, SSL transport for commands, Role-based authentication via Active Directory domain
- REST API and CLI interfaces for scalable systems management
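To make that last bullet concrete, here is a minimal sketch of how an operator script might poll a chassis manager’s REST interface for blade status; the host name, endpoint path and response fields are hypothetical placeholders and are not taken from Microsoft’s published Chassis Manager API.

    # Hypothetical sketch: poll a chassis management REST endpoint for blade status.
    # The URL, path and JSON fields are illustrative placeholders, not the actual
    # Microsoft Chassis Manager API.
    import requests

    CHASSIS_MANAGER = "https://chassis-mgr.example.local:8000"

    def get_blade_status(session):
        # Role-based authentication would normally ride on this session
        # (e.g. an Active Directory / Kerberos handshake); omitted here.
        resp = session.get(f"{CHASSIS_MANAGER}/api/blades", timeout=10)
        resp.raise_for_status()
        return resp.json()  # e.g. [{"slot": 1, "state": "on", "fan_rpm": 5400}, ...]

    if __name__ == "__main__":
        with requests.Session() as session:
            for blade in get_blade_status(session):
                print(blade["slot"], blade["state"])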
How Next-Generation IIMs Help Overcome the Challenges of Mixed-Topology Environments
By Tal Harel, Director of Marketing, RiT Technologies Ltd. Tal has over 10 years of international marketing experience helping high tech companies, including RDT Group and Jacada Ltd., to improve pipelines, shorten sales cycles and increase earnings.
In the previous article, we discussed the advantages of IIM (Intelligent Infrastructure Management) for data center planning and operations. In this article, we discuss the benefits of next-generation IIM for mixed flat/hierarchical topology environments.
Introduction: The Advent of Mixed-Topology Data Centers
The rapid transition to cloud computing and virtualization has led many data centers to begin replacing existing three-tier networks with “new” flat topologies – ironically, these are the same “outdated” topologies they abandoned years ago.
The transition reflects changing priorities in today’s increasingly digitized and connected world, in which dependable network traffic delivery, always-available Internet connectivity and seamless support of connected devices take precedence over most other IT considerations.
However, many data centers – especially those that invested millions of dollars and years of work in building their existing networks – are taking a “SWAT-team” rather than a comprehensive approach to the transition. To avoid risk, they are choosing to install limited flat-topology “greenfield pods” within their existing vertical networks, switching to the flat architecture only for their most performance-critical applications, while leaving many time-tested elements intact.
The Challenge: Providing Support for Mixed-Topology Architecture
While this hybrid approach makes sense from a service evolution perspective, it can play havoc with the manageability of the physical infrastructure.
The reason for this is that the IIM (Intelligent Infrastructure Management) systems deployed at most data centers cannot be used for both new and old networks – either because they are vendor-specific, and so cannot be used with new equipment, or because they are not “smart” enough to handle both inter-connect and cross-connect topologies.
So typically, as soon as the homogeneity of the equipment within the data center is lost, the effectiveness of existing IIM systems is compromised, and the IT staff must return to antiquated manual methods of documenting connectivity and tracking errors.
This moves the data center backwards: managers have less visibility and control over the physical plant, with a reduced ability to conduct effective planning, maintenance, provisioning and troubleshooting activities.
The Solution: IIM Systems for Mixed-Topologies
In contrast, the industry’s latest-generation IIMs are built to support heterogeneous data center environments. They can provide full monitoring for all types of networks – whether inter-connect, cross-connect or mixed; copper or fiber; with hierarchical or flat topology; and independent of carrier or data transfer rate.
PatchView+™ by RiT Technologies, for example, accomplishes this by uniquely identifying every piece of network equipment, regardless of its vendor or the process by which it is connected to other equipment. It does this by gaining access to the component’s unique internal connectivity identifier – an element located deep within the RJ45 connector for copper components and the LC connector in fiber equipment.
Since no extra identification components or layers are needed, the entire network in all its complexity is fully visible to PatchView+. This enables the system to automatically build a comprehensive database of the entire physical infrastructure and to keep it accurate in real-time, regardless of configuration changes.
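For readers unfamiliar with what such a connectivity database actually contains, the sketch below models a single link record and a trivial query for unpatched ports; the field names are illustrative and are not drawn from the PatchView+ schema.

    # Illustrative model of the connectivity records an IIM system might maintain.
    # Field names are hypothetical, not the PatchView+ schema.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class LinkRecord:
        component_id: str   # unique ID read from the RJ45 or LC connector
        housing: str        # patch panel or switch holding the port
        port: int
        connected_to: str   # component_id at the far end, or "" if unpatched

    inventory = [
        LinkRecord("cu-0001", "panel-A1", 1, "cu-0042"),
        LinkRecord("cu-0042", "switch-07", 12, "cu-0001"),
        LinkRecord("fib-0100", "panel-B3", 4, ""),   # an orphaned port
    ]

    orphaned = [r for r in inventory if not r.connected_to]
    print(f"{len(orphaned)} of {len(inventory)} ports are unpatched")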
Once an IIM has access to a comprehensive, trustworthy database of the physical plant, a broad range of labor-saving and availability-enhancing applications can be rolled out, including:
- “Dashboard” and tablet visibility of status and configuration of all physical infrastructure elements;
- Proactive notifications and alarms of problems/pre-defined conditions, with detailed analysis and remediation suggestions;
- Automated work order generation (including multi-team work orders) and LED-guided fault-proof execution;
- Push-button tracking of assets and their dependencies in the data center;
- Support for planning, including simulation and analysis of the effect of potential changes; and
- Automated provisioning of new equipment and services.
The Result: Reduced Data Center Costs
According to Gartner, the use of IIMs can reduce operational costs by as much as 20-30% while decreasing downtime, accelerating service deployment, enhancing security and increasing overall control.
This is why next-generation IIM platforms are advocated by industry leaders as a new best practice for all data centers, and especially for mixed environments, because of the savings they generate and their ability to simplify inherently complex, hard-to-manage infrastructures.
In fact, sometimes a single function, such as device tracking to enable re-discovery of lost or “orphaned” equipment, can save the organization more than the cost of the whole IIM investment.
For example, a RiT customer with a mixed-topology environment used PatchView+ to conduct an automated device-tracking study after the system was deployed, and discovered that 40 percent of its available ports – and therefore the expensive array of switches associated with those ports – were not being used.
As with all IIM installations, once the IT staff are accustomed to using the system’s many error-reducing and planning-enhancement features, it becomes difficult to return to labor-intensive manual systems based on Excel sheets.
In conclusion, the advantages of next-generation IIM systems are especially beneficial to data centers considering an evolution to a mixed environment, in which infrastructure control and management become greater considerations than before.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Data Center Jobs: McKinstry
At the Data Center Jobs Board, we have a new job listing from McKinstry, which is seeking an Assistant Critical Facility Manager in South Jordan, Utah.
The Assistant Critical Facility Manager is responsible for performing managerial functions including hiring, coaching and separations; directing the team to ensure successful achievement of business goals and process adherence; coaching, mentoring and developing members of the team, including conducting goal-setting worksheets and performance reviews; acting as a steward of McKinstry culture; communicating and influencing policies and procedures; developing and managing the training and staffing budget for the development team; and supporting and assisting the Critical Environments Facility Manager. To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
IO Launches OpenStack Cloud on Open Compute Hardware
IO has launched IO.Cloud, which runs on Open Compute hardware, pictured above. In this IO.Cloud rack, Open Vault (Knox) storage units are seen above and below the three-wide Winterfell servers. (Photo: IO)
SAN JOSE, Calif. - IO’s top-to-bottom approach to the data center now extends to the cloud. The data center specialist has entered the cloud computing market with IO.Cloud, a new platform running on OpenStack software and Open Compute hardware.
IO is unveiling its new cloud today at the Open Compute Summit in San Jose, where CEO George Slessman will highlight the company’s membership in the Open Compute Project (OCP) in a keynote session. IO will use OCP servers and storage units to power its cloud, and its R&D team will be an “active participant” in the project.
Why enter the increasingly crowded cloud market? IO’s motivation is simple: offering a shorter path to the cloud for the hundreds of enterprise customers housed in its data centers. Rather than moving data across the Internet to public clouds from Amazon or other providers, these customers can now find the IO.Cloud just a cross-connect away.
“The enterprise market is eager for a cloud solution that both addresses security and avoids expensive lock-in to proprietary architecture,” said Slessman. “Our solution achieves both. IO.Cloud is IT that enables rather than constrains, using visibility and software-defined intelligence to spur innovation, drive growth and contribute to business success.”
OpenStack Meets Winterfell and Knox
IO.Cloud will initially be available in the company’s data centers in Phoenix, Arizona and Edison, New Jersey. IO’s cloud will be housed within IO.Anywhere data center modules, pre-fabricated enclosures built in the company’s factory near Phoenix. It will run on the OpenStack cloud software platform, powered by the Winterfell servers and Knox storage sleds developed through the Open Compute Project.
Existing IO Data Center as a Service (DCaaS) customers can access IO.Cloud over Layer 2 and 3 connectivity, the company said, offering additional security and cost savings.
With its push into cloud services, IO further extends its vision for standardized “Data Center 2.0” technology that is built in factories and managed by software. The company’s offerings now encompass modular data centers, colocation, its IO.OS software for data center infrastructure management (DCIM), and now cloud services.
Visibility Into VM Energy Efficiency
IO says its cloud will tie into those other services, using instrumentation and monitoring to provide data on every level of its customers’ infrastructure and IT operations.
“We have the ability to provide a virtual machine with the PUE next to it,” said IO Chief Innovation Officer Kevin Malik, referring to Power Usage Effectiveness, the leading energy efficiency metric for data centers. “You’ll be able to decide which VM to use based on region and real-time PUE and other metrics. It’s a whole new world. We can really define what it costs to run a VM.”
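As an illustration of how a real-time PUE figure could feed that kind of decision, the sketch below ranks two regions by effective energy cost per VM; the regions, PUE values, power draw and electricity price are made-up examples rather than IO data.

    # Toy example: rank regions by effective energy cost per VM given a
    # real-time PUE reading. All numbers are illustrative, not IO metrics.
    VM_IT_WATTS = 150        # assumed average IT power draw per VM
    PRICE_PER_KWH = 0.08     # assumed electricity price in USD

    regions = {"phoenix": 1.18, "edison": 1.32}   # hypothetical real-time PUE values

    def monthly_energy_cost(pue):
        facility_watts = VM_IT_WATTS * pue            # PUE = total facility power / IT power
        kwh_per_month = facility_watts * 24 * 30 / 1000
        return kwh_per_month * PRICE_PER_KWH

    for region, pue in sorted(regions.items(), key=lambda kv: kv[1]):
        print(f"{region}: PUE {pue:.2f} -> ${monthly_energy_cost(pue):.2f} per VM per month")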
IO isn’t alone in combining OpenStack software and Open Compute hardware, a course previously charted by public cloud provider Rackspace Hosting.
But by combining these technologies within its pre-fabricated data center modules, IO hopes to demonstrate the merits of standard, repeatable design built atop IO.Anywhere.
IO believes its use of open technology will appeal to enterprise customers seeking transparency in their infrastructure. With a growing number of clouds using OpenStack, it will also be attractive to customers concerned about platform lock-in.
Malik said IO’s pricing will be “in line with the biggest players because we’re leveraging crowd-sourced hardware.”
In-House R&D as a Differentiator
IO maintains an in-house research and development operation, which sets the company apart from most colocation providers. That allowed IO to do its own customizations of OpenStack, the open source cloud framework.
“We did it all ourselves,” said Malik. “Having an R&D capability in a company like ours is exciting.”
For the IO.Cloud servers and storage, IO has partnered with AMAX, a Fremont, Calif., company specializing in solutions based on Open Compute designs.
IO was founded in 2007 and has been a pioneer in the emerging market for modular data centers that are built in a factory using repeatable designs and can be shipped to either an IO data center or a customer premises. The company has built two data centers in the Phoenix area, along with facilities in Ohio, New Jersey and Singapore. In addition to its IO.Anywhere modular technology, the company has also developed IO.OS software for managing complex data center infrastructures across multiple sites.
Codero Gets $8M in Financing, Plans Dallas Expansion
A look at the cabinets inside a data center operated by Codero. The company has raised $8 million. (Photo: Codero)
Hosting provider Codero received $8 million in financing from Silicon Valley Bank and Farnam Street Financial, and will use the funds to expand its hybrid hosting product across all of its data center locations.
“We’ve looked at different sources and found debt is a lot less expensive,” said Emil Sayegh, president and CEO of Codero Hosting. “We’ve found a partner in Silicon Valley Bank that completely understands our model. They’re able to help us and fund our growth.”
Travis Wood, managing director for Silicon Valley Bank in Austin, said, “Helping innovators like Codero succeed is what we aim to do every day. With the flexibility provided by this financing, the Codero team is on a path to meet its ambitious global expansion goals.”
The company operates out of data centers in Phoenix, Ashburn and Chicago, but its next phase of growth will be in the Dallas-Fort Worth area. “We signed on a long-term lease, but we can’t say with who yet, only that it’s a state-of-the-art facility,” said Sayegh. “It will be a flagship facility, offering cloud, dedicated and hybrid solutions.” The company is taking down a sizeable chunk of space in the new facility, and there is room to grow.
Codero’s hybrid cloud computing platform allows customers to purchase infrastructure essentially through a “drag and drop” interface and pay for it via a credit card.
Codero’s CEO said that their cloud is “completely automated.” He noted, “There’s nobody running around the data center running cables. It’s all done natively at the switch.” The company has invested substantially in automation technology, hoping that in the long-term the investment will save money.
Codero is making sure all of its data centers are connected with dark fiber, and ultimately wants to extend its hybrid hosting platform to its customers’ data centers.
The team is also growing: the company recently added Robert Autenrieth, a former Rackspace VP of Operations, as COO. The former Racker architected the Dallas-Fort Worth expansion. The company has just tripled the space at its headquarters in Austin as well.
According to Sayegh, the debt financing comes at a good time, with multiple initiatives and a big push to bring its hybrid hosting platform to all locations.
“Fast-growing companies like Codero need long-term relationships that deliver creative solutions,” said Dale Olsen, SVP of sales, Farnam Street Financial. “We feel that Farnam is uniquely positioned to help Codero maintain its high level of service while accelerating its revenue growth.”
Closer Look: IO’s Open Compute Cloud Hardware
A view of the Open Rack fans that exhaust heat from the Open Compute storage units supporting the new IO.Cloud deployment. (Photo: IO)
Data center provider IO has worked with AMAX to create Open Compute servers and storage to power its new IO.Cloud offering. Here’s a closer look at the hardware, which is housed inside IO.Anywhere data center modules in the company’s New Jersey and Phoenix data centers. See Closer Look: The Open Compute Hardware Powering IO.Cloud.
Converging the Modern Data Center Layers
The modern data center is truly evolving. As the central hub for all current cloud technologies, the data center platform must continue to be efficient, scalable and optimized. As more users become mobile and as the enterprise evolves, data center demands will continue to grow. Cloud computing, both public and private, is already impacting how many organizations implement aspects of their IT platform. The promise of higher utilization rates and dynamic resource sharing is driving cost-conscious businesses to review how well existing IT platforms are serving them. With that in mind, implementing private cloud blindly, without a full understanding of the interdependencies between the virtual and physical worlds, will lead to high risks. This has to include the physical data center facility itself, as power distribution, cooling and other environmental monitoring aspects are key to ensuring the high availability of a shared platform.
This whitepaper from nlyte Software shows how new technologies are poised to make a direct impact on both the current and future data center model. In fact, software-defined technologies are already creating direct optimizations for data center platforms. Specifically, software-defined data center (SDDC) technologies help facilitate the much-needed move toward an optimized, balanced service-level-to-cost-ratio approach – away from safe but expensive over-provisioning. However, SDDC tends to focus on the logical layers and neglect the physical, opening the platform up to potentially major issues. The emergence of cross-functional and inclusive tools calls for a more holistic approach to how the data center should be measured, monitored and managed. This brings up the topic of complete data center infrastructure management (DCIM).
Download this whitepaper today to learn about how:
- Private cloud platforms show promise – but also bring issues
- SDDC is only part of the answer
- DCIM pulls SDDC, private cloud, physical hardware and the data center facility together
- DCIM provides major cost advantages to an organization
- DCIM provides the insights a CIO needs to better advise the business
- An effective technology platform needs a combination of tooling
Remember, DCIM tools also provide the insights needed to deliver solid ongoing value to the business: enhancing the lifecycle management of equipment, enabling technology refreshes to be managed with little or no downtime, and offering meaningful advice on the options available when new workloads are needed to support the organization’s ongoing strategy. As the data center becomes more critical for the modern organization, key technologies like SDDC and DCIM will drive data center optimization. A truly efficient data center not only optimizes data control – it also positively impacts the end user.
QLogic, LSI, Seagate Contribute Storage Technology to Open Compute
QLogic is contributing the QLogic QOE2562 to the Open Compute Project, making it the first Fibre Channel adapter for the project. (Photo: QLogic)
Leading technology providers are contributing storage technology to the Open Compute Project, which is holding its Open Compute Summit this week in San Jose, Calif. Here’s a roundup:
QLogic
QLogic, a leading provider of Fibre Channel adapters, today announced the industry’s first Fibre Channel adapter specifically designed for use in Open Compute Project (OCP) servers. The QLogic QOE2562 8Gb Fibre Channel mezzanine adapter brings optimal security, maximum performance and enterprise-class reliability and manageability to OCP data centers. QLogic Fibre Channel OCP adapters are available with the OCP Certified Quanta STRATOS S215-X1M2Z.
“We see an increasing number of customers with Fibre Channel infrastructure who want to adopt Open Compute hardware,” said Mike Yang, general manager, Quanta QCT. “The partnership with QLogic allows us to provide secure, reliable Fibre Channel solutions that reduce energy consumption and maximize data center efficiency with Open Compute. Quanta QCT already delivers the industry’s best TCO for cloud service providers and enterprises adopting Open Compute, so this partnership is an ideal fit.”
“Investment banking, securities and financial service organizations are requiring scalable, high bandwidth, and low latency connectivity options for storage in OCP operations,” said Stu Miniman, principal research contributor, Wikibon. “As they evaluate OCP solutions, users looking for guaranteed and predictable latency from a proven data center network infrastructure now have the option of Fibre Channel for delivering performance and reliability for critical workloads.”
LSI
LSI Corporation is contributing two storage infrastructure reference designs to the Open Compute Project, the company said today.
“Open Compute is about the ability to scale computing infrastructure in the most efficient and economical way possible to achieve more work per dollar,” said Greg Huff, the Chief Technology Officer of LSI. “LSI storage solutions play an important role in the data center, and our technology can be found in every current contribution to the Open Compute Project. We’re excited to formally join the OCP and look forward to continued contributions to the community.”
LSI is contributing two storage infrastructure reference designs to the OCP. The first is a board design for a 12Gb/s SAS Open Vault storage enclosure. LSI is also contributing a design from its Nytro XP6200 series of PCIe flash accelerator cards, which are purpose-built to meet the requirements of Open Compute and other hyperscale servers.
At the OCP Summit, LSI is demonstrating a proof-of-concept Open Compute “ready” Rack Scale Storage Architecture with Nebula, Inc., a leading enterprise private cloud provider.
Seagate
Two new development tools from Seagate Technology related to its Kinetic Open Storage platform have been accepted by the Open Compute Foundation, and will debut today at the OCP Summit.
“Seagate is committed to driving open source innovation and empowering an active ecosystem of system builders and software developers with the tools they need to deploy cutting-edge storage solutions and drive this truly revolutionary platform,” said Mark Re, Seagate’s chief technology officer. “A true game changer, the Kinetic Open Storage platform will provide this community with the ability to deliver innovative, first-of-their-kind, scale-out storage architectures at the industry’s lowest total cost of ownership.”
Building on its continued support of open source innovation, Seagate is making its Ethernet Drive interface specification and T-Card development adapter available to the Open Compute Project. Both Seagate Kinetic drive connector specifications will enable the OCP community to design, test, and deliver applications and system designs built upon the Kinetic Open Storage platform providing an easy, cost-effective path to object-based solution development at a reduced time to market.
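To give a sense of the programming model behind Ethernet-attached key/value drives, here is a minimal sketch of what storing and retrieving an object might look like; the client class and its methods are hypothetical stand-ins, not the actual Kinetic client library.

    # Hypothetical sketch of the key/value model used by Ethernet-attached drives:
    # applications talk to each drive over IP using put/get/delete on keys instead
    # of going through a file system or RAID controller. The KineticClient class
    # below is an illustrative placeholder, not the real Kinetic API.
    class KineticClient:
        def __init__(self, host, port=8123):
            self.host, self.port = host, port
            self._store = {}          # in-memory stand-in for the drive itself

        def put(self, key, value):
            self._store[key] = value

        def get(self, key):
            return self._store.get(key)

    drive = KineticClient("10.0.0.21")              # each drive has its own IP address
    drive.put(b"objects/photo-123", b"\x89PNG...")
    print(drive.get(b"objects/photo-123"))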
“Seagate’s Kinetic technology has enabled us to further innovate on top of the Open Compute Project system design and allows us to increase storage density and reduce performance bottlenecks,” said Steve Ichinaga, senior vice president and general manager for Hyve Solutions. “This new open storage platform is another great example of the innovations arising from the Open Compute Project community allowing us to better address the needs of our scale-out storage cloud customers.”
Mellanox
Mellanox Technologies, Ltd. is offering its 40GbE NIC as a proposed contribution to the Open Compute Project, the company said. The ConnectX-3 Pro OCP-based 40GbE NICs with RDMA over Converged Ethernet (RoCE) and overlay network offloads offer optimized latency and performance for converged I/O infrastructures while maintaining extremely low system power consumption.
“Mellanox has played an important role in a number of OCP projects, including our new networking project,” said Frank Frankovsky, president and chairman of the Open Compute Project Foundation. “We’re pleased to see them propose the 40GbE OCP-based NIC as a contribution to OCP and look forward to collaborating with them further as we work to make open hardware a reality.”
Facebook: Open Compute Has Saved Us $1.2 Billion
Facebook CEO Mark Zuckerberg, at left, discusses the company’s infrastructure with Tim O’Reilly of O’Reilly Media yesterday at the Open Compute Summit in San Jose, Calif. (Photo: Colleen Miller)
SAN JOSE, Calif. - Over the last three years, Facebook has saved more than $1.2 billion by using Open Compute designs to streamline its data centers and servers, the company said today. Those massive savings are the result of hundreds of small improvements in design, architecture and process, writ large across hundreds of thousands of servers.
The savings go beyond the cost of hardware and data centers, Facebook CEO Mark Zuckerberg told 3,300 attendees at the Open Compute Summit in San Jose. The innovations from Open Compute Project have saved Facebook enough energy to power 40,000 homes, and brought carbon savings equivalent to taking 50,000 cars off the road, Zuckerberg said.
The innovations created by Open Compute provide a major opportunity for other companies to achieve similar gains, he said.
“People are spending so much on infrastructure,” said Zuckerberg, who discussed Facebook’s initiatives with technology thought leader Tim O’Reilly of O’Reilly Media. “It creates a pretty broad incentive for people to participate.”
An Open Legacy for Facebook
The focus on open hardware was a logical outgrowth of the way the company views the world, according to Zuckerberg.
“Facebook has always been a systems company,” he said. “A lot of the early work we did was with open systems and open source.”
The $1.2 billion in savings serves as a validation of Facebook’s decision to customize its own data center infrastructure and build its own servers and storage. These refinements were first implemented in the company’s data center in Prineville, Oregon and have since been used in subsequent server farms Facebook has built in North Carolina, Sweden and Iowa.
“As you make these marginal gains, they compound dramatically over time,” said Jay Parikh, Vice President of Infrastructure Engineering at Facebook. “We’re actually saving a ton of money.”
The designs for Facebook’s servers, storage units and data centers have since been contributed to the Open Compute Project, the non-profit community organization driving the open hardware movement.
This happened, Parikh said, because Facebook was willing to set aside the conventional ways that servers and data centers were built, and try something new. The Open Compute Project can extend these savings to other companies, he said.
“You really should be looking at OCP,” said Parikh. “You could be saving a lot of money.”