Data Center Knowledge | News and analysis for the data center industry
Wednesday, May 8th, 2013
| 12:31a |
Syria Cut Off From the Internet 
The war-torn country of Syria effectively disappeared from the Internet Tuesday afternoon, according to multiple monitoring services. The disruption, which appears to have affected all Internet traffic from the country, began about 2:45 p.m. Eastern time, which is about 9:45 p.m. in Syria.
The dropoff in Internet traffic is clearly visible in Google’s traffic monitoring and has been confirmed by Renesys, a leading network monitoring service. “Renesys confirms loss of Syrian Internet connectivity 18:43 UTC. BGP routes down, inbound traces failing,” the company tweeted.
The outage was quickly noted by Umbrella Security Labs in a blog post by CTO Dan Hubbard.
“Effectively, the shutdown disconnects Syria from Internet communication with the rest of the world,” wrote Hubbard. “It’s unclear whether Internet communication within Syria is still available. Although we can’t yet comment on what caused this outage, past incidents were linked to both government-ordered shutdowns and damage to the infrastructure, which included fiber cuts and power outages.”
It’s worth noting that this isn’t the first time this has happened. Syria was previously cut off from the Internet last November.
“Many Syria-watchers feared that the (November) Web shutdown was a precursor to some sort of coordinated regime counterattack or campaign; that President Bashar al-Assad had not wanted the world to see what he was about to do,” notes the Washington Post. “No such campaign ever appeared to come, however. Later, many Syria analysts concluded that the regime may have been seeking to hamper rebel communication.” | | 12:00p |
Iron Mountain is Taking the Data Center Underground  Racks of servers reside next to the limestone wall of an underground cave inside an Iron Mountain data center in the Underground in Boyers, Pa. (Photo: Iron Mountain)
After several years of quietly developing space in its massive underground facility in Pennsylvania, Iron Mountain is entering the data center business in a bigger way. The company has announced plans to build and lease data centers, offering both colocation services and wholesale suites to enterprise and government customers.
Iron Mountain is building out data center space within the Underground, its 145-acre records storage facility located 220 feet underground in a former limestone mine in Boyers, Pa., about 50 miles north of Pittsburgh. The facility has long been used for storing paper records and tape archives, and has an existing workforce of 2,700 employees, as well as its own restaurant, fire department, water treatment plant and back-up power.
But the Underground also offers a naturally low ambient temperature of 52 degrees, and has an underground lake that can be used to provide cool water for data center cooling systems, eliminating the expense of energy-hungry chillers. Iron Mountain developed a proof-of-concept facility known as Room 48, and has subsequently leased data center space to Marriott and several government agencies.
Leveraging its Corporate DNA
With the launch of Iron Mountain Data Centers, the company is seeking to leverage both the Underground facility and its existing document storage relationships with many of the nation’s largest IT users.
“We spent a lot of time looking at the data center market,” said Mark Kidd, senior vice president and general manager of data centers for Iron Mountain. “Most of today’s data center providers sell space. We’re packaging together services that will enable enterprises to outsource the ongoing management of their data center. We want to make it easier for enterprises to outsource. And our DNA in tracking information assets from creation to disposition is particularly differentiating for organizations that must comply with industry regulations. No one in today’s data center market has our track record in security and facilitating compliance.”
Kidd says Iron Mountain is building “several megawatts” of speculative technical space at the Underground to get its data center program rolling. Up to 10 megawatts of critical power is available, Kidd said. The facility currently has two carriers available, but will add two more within the next 90 days and expects to have six providers in the facility within 6 months.
“The fact that it is an active multi-tenant data center makes it pretty easy to get carriers in,” said Kidd of the 1.7 million square foot facility. “We are currently a living, breathing, enormous facility with lots of space to build out.”
The Data Bunker Goes Wholesale
Iron Mountain’s strategy will provide the largest test yet of the appetite for underground “data bunkers,” bringing scale and marketing muscle to a niche that has been largely limited to smaller providers. These “nuke proof” underground facilities are often based in caves or former telecom or military installations, and appeal to tenants seeking highly secure space, such as government agencies, financial services firms, and healthcare providers or other enterprises with high compliance requirements.
By bringing a wholesale offering and a name brand into the data bunker space, Iron Mountain may capture the interest of national customers considering underground space. The company is offering both retail colocation space and wholesale suites. Services include engineering and design, development and construction, and ongoing facility operations and management.
In 2008, Marriott leased 12,500 square feet of space to establish a data center in the Underground for disaster recovery purposes.
“We have always had a rigorous and constant focus on having disaster preparedness in place,” said Dan Blanchard, vice president of enterprise operations at Marriott. “More than five years ago, we determined that we needed more flexibility and we got it. Today we have a data center that provides Marriott with a tremendous capability for disaster recovery, and we have a great partner in Iron Mountain.”
Looking Beyond the Underground
In the short term, Iron Mountain’s data center business will focus on the Pennsylvania facility. But the company realizes that a long-term data center strategy will need to include facilities in more than one market.
Kidd notes that Iron Mountain has the real estate portfolio to make that possible. The company operates 800 facilities, and owns about 40 percent of those sites. The company is in the process of converting to a real estate investment trust (REIT), a process it hopes to complete by the beginning of 2014.
A REIT is a corporation or trust that uses the pooled capital of many investors to purchase and manage income property. Income comes from the rent and leasing of the properties, and REITs are legally required to distribute 90 percent of their taxable income to investors. Three of the largest public data center developers – Digital Realty (DLR), DuPont Fabros (DFT) and CoreSite Realty (COR) – are organized as REITs.
This underground lake provides cool water for the cooling systems at Iron Mountain’s underground facility in Boyers, Pa., which allows the facility to operate without chillers. (Photo: Iron Mountain) | | 12:30p |
DFT Building Massive New Data Center in Ashburn  An aerial view of DuPont Fabros Technology’s Ashburn Corporate Center, showing the five existing data centers and the future location of the new ACC7 facility. (Image: DuPont Fabros)
DuPont Fabros Technology has begun work on a huge new data center at its Ashburn Corporate Center campus in northern Virginia, the company said Tuesday. The new ACC7 facility will be the largest project yet for the data center developer, with a whopping 41.6 megawatts of power.
The company has been signaling its intent to build additional space in northern Virginia for some time. Now that it has completely filled its ACC6 data center, DuPont Fabros Technology (DFT) sees the need to have additional capacity ready for its customers, which include some of the fastest-growing Internet companies.
“Leasing has been very strong at the Ashburn campus,” said Hossein Fateh, President and CEO of DuPont Fabros Technology. “Historically, as we announce a new building, a considerable amount of space gets pre-leased prior to delivery. Given the strength of this market, we have commenced development of 11.89 megawatts of ACC7, and expect it to be delivered in the second quarter of 2014.”
More Capacity, but Smaller Increments
The new data center will bring substantial new inventory online in one of the industry’s busiest markets. DFT has previously built its facilities in phases of 13 megawatts at a time. ACC7 will feature the first use of a new design that allows the company to add capacity in smaller chunks. For ACC7, the base “building block” will be 5.9 megawatts, with the first phase comprising two blocks of space.
Fateh says the new design can work in increments as small as 4.5 megawatts and still meet DuPont Fabros’ goals for return on its investment.
“We expect three major benefits from the new design,” said Fateh. “First, we expect to achieve a PUE of 1.2, directly benefiting our tenants’ overall expense structure. Second, moving to a single electrical ring bus provides more resiliency and helps us decrease our development cost. Third, we’re now capable of delivering our products in smaller increments, which enables us to accurately match supply and demand while reducing the risk of CapEx spend and carrying costs.”
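For context, PUE (power usage effectiveness) compares total facility energy to the energy delivered to the IT equipment, so a PUE of 1.2 implies roughly 0.2 watts of cooling and power-distribution overhead for every watt of IT load. A minimal sketch with illustrative numbers (the IT load figure below is assumed, not a DuPont Fabros number):

```python
# Rough illustration of what a PUE of 1.2 implies; the IT load is illustrative.
# PUE = total facility energy / IT equipment energy

it_load_kw = 1000                              # assumed IT load
pue = 1.2                                      # design target cited above

total_facility_kw = it_load_kw * pue
overhead_kw = total_facility_kw - it_load_kw   # cooling, power conversion, etc.

print(f"Total facility load: {total_facility_kw:.0f} kW")   # 1200 kW
print(f"Non-IT overhead:     {overhead_kw:.0f} kW")         # 200 kW
```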
DuPont Fabros said it expects the construction of the conduit system and first 11.89 megawatts of capacity to cost between $7 million and $7.9 million per megawatt, for a total cost of $155 million to $160 million. | | 1:00p |
Showing IT Value through Proper Metrics Hani Elbeyali is a data center strategist for Dell. He has 18 years of IT experience and is the author of Business Demand Design methodology (B2D), which details how to align your business drivers with your IT strategy.
Even when IT investments are showing positive business results, how can you prove beyond any doubt that the positive impact is attributable to those investments? Maybe the improvement is due to external economic factors, a shift in market demand or weak competition. Measuring and reporting the benefits to the business is fundamental, but how and what to measure are equally important. The right metrics will support your efforts to realize and sustain IT value.
IT is Core to the Organization
Reporting to management on IT should not mean using IT jargon. This doesn’t help IT and can be counterproductive because the unfamiliar language causes the IT department to be seen as an outsider to the organization’s core.
Imagine the CIO walking into a board meeting. The head of sales reports on the execution strategy to meet the organization’s financial targets, the CMO reports on the globalization strategy to reach new customers, and the CFO reports on the financial health of the organization. Now it is the CIO’s turn to report on IT, and he or she starts by talking about the data center PUE ratio, the new ERP modernization project, the unified fabric offerings, and, more abstractly, the cloud.
Immediately, everyone in the room puts their heads down, then looks at each other. There’s confusion everywhere – no one understands what this jargon really means. They are asking, “How does any of that IT stuff help us run the business and compete? Why is the CIO even here?” And so on. This type of communication cements the notion that IT is not core to the organization; that it is only a supporting function. What exacerbates the issue is the attitude that if IT is not core to the organization, then why not outsource it, or at minimum out-task some functions at a cheaper price?
What to Measure and Communicate are the Real Issues
I certainly agree, showing IT’s value to the enterprise is challenging. The problem is not the value IT creates, but what to measure and how to communicate this value. Current practices in IT performance measurement, metrics and reporting do not help, because they concentrate on reporting how IT spends money, rather than the value created from the spending.
Businesses usually measure success in monetary terms: profits and losses, and attainable financial targets. Investments in IT are made only in initiatives yielding a positive return on investment (ROI). In many cases IT projects are long term, and the ROI comes over long payback periods, which is not an attractive proposition to the business. Chargeback and re-allocation make things worse, because each line of business argues that it is paying too much. As a result, to change the perception of IT into that of a business driver, you must stop reporting on hardware and software performance and start reporting on IT’s contribution to the success of the business.
What Happens When IT’s Impact on the Business Is Reported Correctly
To demonstrate how technology’s contribution can be attributed to the business, and what executives expect to see when IT reports to management, take a look at the example report in Table 1.0, prepared for a mid-size organization.¹ Keep in mind this report is for illustration only; the intent is to show which factors are important to report to the business units for the past quarter.
Communicating IT Value
The sample report above speaks the same language as the business, and it reflects IT’s contribution to the business over the past quarter:
- IT expense as a percentage of revenue and gross income for the quarter was 6.3 percent, placing the IT organization in the top 10 percent of the IT industry (see the sketch after this list for the underlying arithmetic).
- The contribution of IT helped grow the business by 14.9 percent and drive 26.7 percent top-line revenue growth, while keeping operating expenses flat.
- The report shows evidence of the value strategic technology investments can add to business performance, even in a down economy.
- It is also worth pointing out that the business continues to squeeze and shrink the wrong operating expense: IT is only 6.3 percent of the firm’s revenue.
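As a minimal, hypothetical sketch of the arithmetic behind the first bullet above (both figures below are invented and are not taken from the report):

```python
# Hypothetical illustration of the "IT expense as a percentage of revenue"
# metric from the sample report; both figures are invented.

quarterly_revenue = 120_000_000      # assumed revenue for the quarter
quarterly_it_expense = 7_560_000     # assumed total IT spend for the quarter

it_expense_ratio = quarterly_it_expense / quarterly_revenue
print(f"IT expense as % of revenue: {it_expense_ratio:.1%}")   # 6.3%
```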
The business only wants to know the impact of IT performance on revenue, cost and margin. It’s the job of the IT leader to act as a gateway and ensure communications are translated properly in both directions.
Please note the opinions expressed here are those of the author and do not reflect those of his employer.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1 This is an example report for illustration purposes only | | 1:30p |
I’ll Take the Cabinet With the Wide Screen, Please
“Does your datacenter cage have SportsCenter?” This photo tweet yesterday by Mark Imbriaco, who works on the Technical Operations team at GitHub, was too good not to share. Imbriaco, who has previously worked at LivingSocial, Salesforce.com, Heroku, 37Signals and AOL, has clearly seen more than a few cages in his time, and knows the value of some customization.
We recently shared the trend toward worker-friendly amenities in newer data center projects. But Imbriaco’s tweet raises another aspect of this issue: what are the best ways to personalize space within your cages and data center suites? Share your favorites in our comments. | | 2:00p |
Riverbed Introduces Application Delivery as a Service Riverbed (RVBD) announced a new platform enabling any customer to deliver application delivery controller-as-a-service (ADCaaS) with the Stingray Services Controller. This new product will automate the deployment of application delivery services for any network architecture including software defined networking (SDN).
As application and data center architectures, workflows, and operations models evolve, the Stingray Services Controller makes an “ADC per application” deployment model possible. Riverbed’s ADCaaS enabling technology now gives cloud providers and enterprises deploying in the private cloud the ability to automatically provision, deploy, license, meter, and manage their ADC inventory in an as-a-service model.
Stingray Traffic Manager (STM) “micro” instances provide a consumption model for customers deploying ADC services. This eliminates the traditional throughput-based ADC sizing model that forces customers to guess their traffic load and procure ADC capacity in advance. With the STM “micro” instance, ADC services can now be elastically scaled on demand and right-sized to suit each application in the data center, offering high density, full isolation, and multi-tenancy scaling.
“With the emergence of the virtualized data center, legacy ADCs can be a bottleneck and were starting to be excluded from virtualization strategies and cloud deployments,” said Jeff Pancottine, senior vice president and general manager of the Riverbed Stingray application delivery business unit. “With Stingray Services Controller, customers will have a hyper-elastic ADC platform that can adapt to workload changes. This is a game changer – today we are introducing a software-defined application delivery fabric that enables Layer 7 services on top of any data center architecture.”
“Riverbed’s Stingray Services Controller and the Joyent high-performance cloud will enable our customers to provision, license, and scale ADC services in a very easy, agile, and cost effective way,” said Jason Hoffman, founder and chief technology officer, Joyent. “This ground-breaking, high-performance approach maps to our DNA and will enable us to deploy and manage ADCs in a truly elastic cloud delivery model.” | | 2:31p |
Designing for High Availability and System Failure This is the third article in a series on DCK Executive Guide to Data Center Designs.
In the world of mission critical computing, the term “data center” and its implied and projected level of “availability” have always referred to the physical facility and its power and cooling infrastructure. With the advent of the “cloud,” what constitutes “availability” of a “data center” may be up for re-examination.
Designing for failure and accepting equipment failure (facility or IT) as part of the operational scenario is imperative. As was discussed previously (see part 1, Build vs Buy), the ascending tier levels of power and cooling equipment redundancy can mitigate the impact of a facility-based hardware failure. However, the IT architects are responsible for the overall availability of the IT resources, by means of redundant servers, storage and networks, as well as the software to monitor, manage, re-allocate and re-direct applications and processes to other resources in the event of an IT systems failure.
Traditionally there has been very little discussion or interaction between the IT architects and the data center facility designers regarding the ability of IT systems to handle failover. As more enterprise organizations begin to virtualize and utilize public and private cloud resources, the amount of redundant IT resources located within any one physical data center may change, creating a logical redundancy shared among two or more sites. The ability to shift live computing loads across hardware and sites is not new and has been done many times in the past. Server clustering technology, coupled with redundant, replicated data storage arrays, has been available and successfully used for over 20 years. While not every application may fail over perfectly or seamlessly yet, we cannot underestimate the long-term importance of including the resiliency of the IT systems in our overall goal of availability when deciding what level of facility-based redundancy is required to meet the desired level of overall system availability.
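To see why IT-level redundancy across sites changes the facility-level calculus, consider a rough availability sketch. The numbers are illustrative, and the model assumes independent site failures and perfect, instantaneous failover, which, as noted above, is not always true in practice:

```python
# Rough availability sketch: a workload replicated across independent sites
# survives as long as at least one site is up. Figures are illustrative and
# assume perfect failover between sites.

def combined_availability(site_availabilities):
    """1 minus the product of per-site unavailabilities."""
    unavailability = 1.0
    for a in site_availabilities:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

single_site = 0.9998                                   # an assumed per-site figure
two_sites = combined_availability([0.9998, 0.9998])

print(f"Single site:           {single_site:.8f}")     # 0.99980000
print(f"Two independent sites: {two_sites:.8f}")       # 0.99999996
```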
A holistic approach that includes an evaluation of the resiliency of the IT architecture in the “availability” design and calculations should be part and parcel of the overall business requirements when making decisions regarding the facility tier level, the number of physical data centers, and their geographic locations. This can potentially reduce costs and greatly increase overall “availability,” as well as business continuity and survivability during a crisis. Even basic decisions, such as how much generator fuel should be stored locally (i.e., 24 hours, three days or a week), need to be re-evaluated in light of recent events such as Superstorm Sandy, which devastated the general infrastructure in New York City and the surrounding areas (see part 4, Global Strategies).
Ideally, this realistic re-assessment and analysis should be a catalyst for a sense of shared responsibility between the IT and Facilities departments, as well as for a re-evaluation of how data center “availability” is ultimately architected, defined and measured in the age of virtualization and cloud-based computing. These types of conversations and decisions must be motivated and made at the executive level of management.
Designing for an enterprise-owned data center is different than designing for a colo, hosting or cloud data center. Also, the level of system redundancy does not have to exactly match the tier structure. Many sites have been designed with a higher level of electrical redundancy (i.e., 2N) while using an N+1 scheme for cooling systems. This is particularly true for sites that use individual CRAC units (which are autonomous), rather than a central chilled water plant.
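For readers less familiar with the notation, here is a minimal sketch of how unit counts differ under N+1 and 2N, using hypothetical capacity figures:

```python
# Hypothetical illustration of N+1 vs 2N unit counts.
# N = the number of units needed to carry the full design load.

import math

design_load_kw = 800      # assumed critical load
unit_capacity_kw = 200    # assumed capacity of one CRAC unit or UPS module

n = math.ceil(design_load_kw / unit_capacity_kw)   # N   = 4 units
n_plus_1 = n + 1                                   # N+1 = 5 units (one spare)
two_n = 2 * n                                      # 2N  = 8 units (a full duplicate set)

print(f"N = {n}, N+1 = {n_plus_1}, 2N = {two_n}")
```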
Site Selection and Sustainable Energy Availability and Cost
The design and site selection processes need to be intertwined. Many issues go into site selection, such as geographic stability and power availability, as well as climatic conditions, which will directly impact the type and design of the cooling system (see part 2, Total Cost of Ownership). Generally, the availability of sufficient power is near the top of the first critical checklist of site evaluation questions, along with the cost of energy. However, in our present era of social consciousness about sustainability, and of watchdog organizations such as Greenpeace, the source of the power has also become a factor, based on the type of fuel used to generate it, even if the data center itself is extremely energy efficient. Previously, those decisions were typically driven by the lowest cost of power. Some organizations have picked locations based on the ability to purchase commercial power that has some percentage of generation from a sustainable source. The Green Grid has defined the Green Energy Coefficient (GEC), a metric that quantifies the portion of a facility’s energy that comes from green sources.
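As described above, the GEC is simply the green-sourced portion of a facility’s total energy consumption; a small sketch with invented figures:

```python
# Sketch of the Green Energy Coefficient (GEC) as described above:
# the portion of a facility's total energy that comes from green sources.
# Both figures are invented for illustration.

green_energy_kwh = 3_500_000     # assumed green-sourced energy for the period
total_energy_kwh = 10_000_000    # assumed total facility energy for the period

gec = green_energy_kwh / total_energy_kwh
print(f"GEC = {gec:.2f}")        # 0.35 -> 35% of the energy is green-sourced
```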
In other cases, some high-profile organizations have built new leading-edge data centers with on-site generation capacity such as fuel cells, solar and wind, to partially offset or minimize their use of less sustainable local utility generation fuel sources, such as coal. While this impacts the TCO economics, since it requires a larger upfront capital investment, there may be local and government tax or financial incentives available to offset the upfront costs. Nonetheless, while this option may not be practical for every data center, green energy awareness is increasing and should not be ignored.
The complete Data Center Knowledge Executive Guide on Data Center Design is available in PDF complements of Digital Realty. Click here to download. | | 3:00p |
EMC Launches Data Protection Suite, Software Enhancements Day two of the EMC World conference in Las Vegas began with a session from EMC Chairman and CEO Joe Tucci, and Pivotal CEO Paul Maritz. EMC announced a Data Protection Suite, made updates to its management software suites, and enhanced its Syncplicity file sync and sharing solution.
Data Protection Suite
EMC announced a flexible approach to EMC backup and archive solutions with the EMC Data Protection Suite. It has a flexible licensing model that allows customers to mix and match the usage of individual products to best fit their requirements. The EMC Protection Storage Architecture leverages consolidated protection storage as a repository for data, provides integration across the IT environment, and is tied together with consolidated data management as a way to deliver a catalog of data protection services.
“As the world’s largest backup provider, we have to innovate to retain our clients’ patronage,” said Guy Churchward, President of the EMC Backup Recovery Systems Division. “This comes in the form of not just technology but also the consumption model. The EMC Data Protection Suite is an excellent way for customers to harness the benefits of our broad portfolio with a simple, flexible approach without compromise.”
Enhanced Management software suites
EMC announced enhanced management software suites designed to provide transparency into storage, network, and compute infrastructures, including new and tight integration with EMC ViPR. The new EMC Service Assurance Suite and updates to the EMC Storage Resource Management Suite share a common presentation layer with ViPR, which allows operations and storage teams to visualize, analyze, and optimize their infrastructure.
“The enablement of IT optimization and analytics afforded by Service Assurance Suite and Storage Resource Management Suite are the result of unprecedented transparency — transparency that only comes with powerful software and a simplified management layer,” said Bob Laliberte, Senior Analyst at ESG.
The new Service Assurance Suite features reports on availability, performance and configuration management, and cross-domain management analysis. It has been updated with new Dashboard and Explore Views for VNX environments to enhance file reporting and end-to-end relationship and topology visualization.
EMC Syncplicity enhanced
EMC announced increased storage flexibility and control for EMC Syncplicity enterprise file sync and sharing solution. The Syncplicity policy-driven hybrid cloud will allow customers to utilize both private and public clouds simultaneously, automatically optimize storage utilization and performance, and adhere to security and regulatory compliance requirements based on user and content types.
Syncplicity offers storage deployment options: customers can select either a private cloud deployment, through EMC Isilon Scale-out NAS or EMC Atmos object-based storage, or a public cloud option, to store user files and version history and sync them across all their devices. The policy-driven hybrid cloud approach is critical because enterprises need a way to utilize different cloud deployment models to optimize storage utilization based on file security and user requirements. | | 3:30p |
Teradata Leverages In-Memory Technology For Big Data Teradata (TDC) introduced Intelligent Memory, a new database technology that creates extended memory space beyond cache that significantly increases query performance and enables organizations to leverage in-memory technologies with big, diverse data.
“The introduction of Teradata Intelligent Memory allows our customers to exploit the performance of memory within Teradata Platforms, which extends our leadership position as the best performing data warehouse technology at the most competitive price,” said Scott Gnau, president Teradata Labs. “Teradata Intelligent Memory technology is built into the data warehouse and customers don’t have to buy a separate appliance. Additionally, Teradata enables its customers to buy and configure the exact amount of in-memory capability needed for critical workloads. It is unnecessary and impractical to keep all data in memory, because all data do not have the same value to justify being placed in expensive memory.”
Intelligent Memory is a part of the overall Unified Data Architecture strategy, which leverages Teradata, Teradata Aster, and open source Apache Hadoop. It manages the data by predictively placing the “hottest” or most frequently used data into memory, then automatically updating and synchronizing it. Access to data in-memory eliminates disk I/O bottlenecks and query delays, and increases system throughput.
Intelligent Memory uses algorithms that automatically age, track, and rank data to ensure effective data management and support for user queries. Data can be stored and compressed in columns and rows, which maximizes the amount of data in the memory space. Teradata Intelligent Memory places only the hottest data into the new extended memory space.
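As an illustration only (this is not Teradata’s actual implementation), temperature-based placement can be thought of as ranking data by recent access frequency and keeping just the hottest objects in the limited memory tier:

```python
# Illustrative sketch of "temperature"-based placement, not Teradata's code:
# rank objects by access frequency and keep the hottest in the memory tier.

from collections import Counter

access_log = ["orders", "orders", "clicks", "orders", "inventory",
              "clicks", "orders", "archive_2010"]
memory_slots = 2   # assumed capacity of the extended memory space

temperature = Counter(access_log)                 # hotter = accessed more often
ranked = [name for name, _ in temperature.most_common()]

in_memory = set(ranked[:memory_slots])            # hottest data kept in memory
cooler_tiers = set(ranked[memory_slots:])         # cooler data stays on SSD/disk

print("In memory:   ", in_memory)                 # {'orders', 'clicks'}
print("Cooler tiers:", cooler_tiers)              # {'inventory', 'archive_2010'}
```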
“Teradata’s new in-memory architecture is integrated with its management of data temperature,” said Richard Winter, chief executive officer, WinterCorp. “This is very significant, because the hottest data will migrate automatically to the in-memory layer, Teradata Intelligent Memory; the next hottest data will move automatically to solid state disk; and so on. Teradata also provides the column storage and data compression that amplify the value of data in memory. The customer sees increased performance without having to make decisions about which data is placed in memory.” | | 6:31p |
Fusion-io Shares Plunge After CEO Departs Shares of flash memory specialist Fusion-io plunged more than 20 percent today after the company announced that CEO and President David Flynn and Chief Marketing Officer Rick White had departed to “pursue entrepreneurial investing activities.” Board member Shane Robison has been named Chairman, CEO and President.
Flynn’s sudden departure didn’t sit well with securities analysts, who noted that it had been just two weeks since Fusion-io announced a $100 million acquisition. Analysts also fretted about whether Flynn’s exit would impact the company’s business with Facebook, which, along with Apple, is one of Fusion-io’s largest customers. Several analysts suggested Flynn had been a key player in the relationship with Facebook.
Shares of Fusion-io were off $3.96 to $14.04, a drop of 22 percent, in afternoon trading on the New York Stock Exchange.
Robison, 59, has more than 30 years of experience in management roles with some of the world’s leading technology companies, including AT&T Labs, Cadence Design Systems and Apple. He most recently served as Executive Vice President and Chief Strategy and Technology Officer of HP from May 2002 until November 1, 2011. | | 7:09p |
Open Compute Will Begin Building Network Switches  A look at some of the network cabling in a Facebook data center. Facebook will lead an effort by the Open Compute Project to develop an open top-of-rack network switch. (Photo: Facebook)
In a move that will likely accelerate the shakeup in the networking sector, the Open Compute Project said this week that it will expand its “open source hardware” initiative to include network switches. The project, which was founded by Facebook to promote standardized hardware for web-scale data centers, has led to rapid innovation in the server market and has also developed a storage offering.
The announcement is the largest step yet in extending the open source hardware movement to networking, a sector which has been dominated by a handful of large vendors offering routers and switches managed by proprietary software. It follows several years of progress in the development of software to support open networking, especially in the use of software-defined networking (SDN) that allows network equipment to be managed by external devices (typically commodity servers).
The Open Compute Project (OCP) said it has formed a team to work on a specification for an “OS-agnostic” top-of-rack switch. Najam Ahmad, who runs the network engineering team at Facebook, will lead the project, with participation from two standards groups – the Open Networking Foundation and OpenDaylight – as well as Big Switch Networks, Broadcom, Cumulus Networks, Facebook, Intel, Netronome and VMware.
Goals: Innovation, Efficiency, Freedom
“It’s our hope that an open, disaggregated switch will enable a faster pace of innovation in the development of networking hardware; help software-defined networking continue to evolve and flourish; and ultimately provide consumers of these technologies with the freedom they need to build infrastructures that are flexible, scalable, and efficient across the entire stack,” the OCP’s Jay Hauser wrote in a blog post. “This is a new kind of undertaking for OCP — starting a project with just an idea and a clean sheet of paper, instead of building on an existing design that’s been contributed to the foundation — and we are excited to see how the project group delivers on our collective vision.”
The announcement provides a reminder of how the pace of innovation has accelerated in the data center world. It’s been less than three years since Amazon’s James Hamilton predicted huge changes in networking in a keynote at the Velocity conference. Here’s an excerpt from our coverage at the time:
“We’re very close to a fundamental change in the networking world,” said Hamilton, who said the industry is beginning to look beyond tightly integrated vendor offerings. He envisions a future in which data center operators can more easily mix and match hardware and software from disparate sources, including open source offerings. “We’ll get our Linux of the networking world,” Hamilton said.
There’s been great progress on the software side of the business. Open Compute has quickly built an ecosystem of hardware companies developing designs based on its efforts. In less than two years, the OCP has grown beyond its origins as a showcase for Facebook’s design innovations, evolving into an active community building cutting-edge server and storage hardware, disrupting the traditional IT supply chain in the process. If similar progress occurs with the OCP effort on a top-of-rack switch, the networking sector will soon get even more interesting and competitive. |