Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, April 10th, 2013

    12:30p
    Designing for Dependability in the Cloud

    David Bills is Microsoft’s chief reliability strategist and is responsible for the broad evangelism of the company’s online service reliability programs.

DAVID BILLS
    Microsoft

This article builds on the previously published articles in this series, “Designing for Dependability in the Cloud” and “Microsoft’s Journey: Solving Cloud Reliability With Software.” In part three, I discuss the cultural shift and evolving engineering principles Microsoft is using to help improve the dependability of the services we offer and help customers realize the full potential of the cloud.

    From the customer’s perspective, cloud services should just work. But, as we’ve discussed throughout this series, service interruption is inevitable — it’s not a matter of if, it’s strictly a matter of when. No matter how expertly online services are designed and built, unexpected events can — and will — occur. The differentiator is in how service providers anticipate, contain, and recover from these kinds of situations. We need to protect the customer experience from these inevitabilities.

    Guiding Design Principles

    There are three guiding design principles for cloud services: 1) data integrity, 2) fault tolerance, and 3) rapid recovery. These are three attributes that customers expect, at a minimum, from their service. Data integrity means preserving the fidelity of the information that customers have entrusted to a service. Fault tolerance is the ability of a service to detect failures and automatically take corrective measures so the service is not interrupted. Rapid recovery is the ability to restore service quickly and completely when a previously unanticipated failure occurs.

    As service providers, we have to try to identify as many potential failure conditions as possible in advance, and then account for them during the service design phase. This careful planning helps us decide exactly how the service is supposed to react to unexpected challenges. The service has to be able to recover from these failure conditions with minimal interruption. Though we can’t predict every failure point or every failure mode, with foresight, business continuity planning, and a lot of practice, we can put a process in place to prepare for the unexpected.

Cloud computing can be characterized as a complex ecosystem consisting of shared infrastructure and loosely coupled dependencies, many of which will be outside the provider’s direct control. Traditionally, many enterprises maintained on-premises computing environments, giving them direct control over their applications, infrastructure, and associated services. However, as the use of cloud computing continues to grow, many enterprises are choosing to relinquish some of that control to reduce costs, take advantage of resource elasticity (for example, compute, storage, networking), facilitate business agility, and make more effective use of their IT resources.

    Understanding the Team’s Roles

    From the service engineering teams’ perspective, designing and building services (as opposed to box products, or on-premises solutions) means expanding the scope of their responsibility. When designing on-premises solutions, the engineering team designs and builds the service, tests it, packages it up, and then releases it along with recommendations describing the computing environment in which the software should operate. In contrast, services teams design and build the service, and then test, deploy, and monitor it to ensure the service keeps running and, if there’s an incident, ensure it is resolved quickly. And the services teams frequently do this with far less control over the computing environment the service is running in!

    Using Failure Mode and Effects Analysis

    Many services teams employ fault modeling (FMA) and root cause analysis (RCA) to help them improve the reliability of their services and to help prevent faults from recurring. It’s my opinion that these are necessary but insufficient. Instead, the design team should adopt failure mode and effects analysis (FMEA) to help ensure a more effective outcome.

    FMA refers to a repeatable design process that is intended to identify and mitigate faults in the service design. RCA consists of identifying the factors that resulted in the nature, magnitude, location, and timing of harmful outcomes. The primary benefits of FMEA, a holistic, end-to-end methodology, include the comprehensive mapping of failure points and failure modes, which results in a prioritized list of engineering investments to mitigate known failures.

FMEA applies systematic techniques, developed by reliability engineers, to study problems that might arise from malfunctions of complex systems. Each potential problem is assessed for severity, frequency of occurrence, and detectability, and the engineering investment required to cope with the underlying malfunction is then prioritized according to the risk it represents.

    The FMEA process has five key steps.

[Figure 1: FMEA key steps]
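The article does not prescribe a particular scoring scheme, but a common way to turn FMEA assessments of severity, occurrence, and detectability into the prioritized list of engineering investments described above is to compute a Risk Priority Number (RPN) for each failure mode. The sketch below is illustrative only; the failure modes and scores are hypothetical.

# A minimal sketch of FMEA-style prioritization, assuming the common
# Risk Priority Number scheme: RPN = severity * occurrence * detection,
# each scored 1-10. The failure modes listed here are made up.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int     # 1 = negligible impact, 10 = catastrophic
    occurrence: int   # 1 = very rare, 10 = almost certain
    detection: int    # 1 = almost always detected, 10 = almost never detected

    @property
    def rpn(self) -> int:
        """Risk Priority Number: higher means mitigate sooner."""
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for an online service (illustrative values).
failure_modes = [
    FailureMode("Primary database replica loss", severity=9, occurrence=3, detection=2),
    FailureMode("Expired TLS certificate", severity=7, occurrence=4, detection=6),
    FailureMode("Dependency API rate limiting", severity=5, occurrence=7, detection=4),
]

# Produce a prioritized list of mitigation work, highest risk first.
for fm in sorted(failure_modes, key=lambda f: f.rpn, reverse=True):
    print(f"{fm.rpn:4d}  {fm.name}")

Higher-RPN items get engineering attention first, which is exactly the prioritized mitigation list the FMEA process is meant to produce.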

    12:30p
    How Can You Save Up to 30% in Data Center Operation Costs?

With more users, more devices and many more connections coming into the cloud, the data center has become an integral part of any IT infrastructure. Organizations rely more heavily than ever on data center operations, creating a direct need for optimal efficiency.

    This leads to an increased demand for controlling data center components that goes far beyond hardware management and advanced cooling systems. The complexity of the modern environment calls for a more holistic energy optimization solution.

Accurate monitoring of power consumption and thermal patterns creates a foundation for enterprise-wide decision making, with the ability to (a minimal aggregation sketch follows the list):

    • Monitor and analyze power data by server, rack, row, or room;
    • Track usage for logical groups of resources that correlate to the organization or data center services;
    • Automate condition alerts and triggered power controls based on consumption or thermal conditions and limits; and
    • Provide aggregated and fine-grained data to web-accessible consoles and dashboards, for intuitive views of energy use that are integrated with other data center and facilities management views.
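As a rough illustration of the kind of rollup such monitoring enables (this is not Intel DCM’s actual API; the readings and limit below are made up), per-device power samples can be aggregated by rack and row and checked against a budget:

# A minimal sketch of aggregating per-device power readings to rack and
# row level and flagging racks that exceed an illustrative power budget.
from collections import defaultdict

# Hypothetical readings: (room, row, rack, device, watts)
readings = [
    ("room1", "rowA", "rack01", "srv-001", 310.0),
    ("room1", "rowA", "rack01", "srv-002", 295.5),
    ("room1", "rowA", "rack02", "srv-003", 420.0),
    ("room1", "rowB", "rack07", "srv-104", 380.2),
]

RACK_POWER_LIMIT_W = 600.0  # illustrative per-rack budget

rack_totals = defaultdict(float)
row_totals = defaultdict(float)
for room, row, rack, device, watts in readings:
    rack_totals[(room, row, rack)] += watts
    row_totals[(room, row)] += watts

for (room, row, rack), total in sorted(rack_totals.items()):
    status = "ALERT" if total > RACK_POWER_LIMIT_W else "ok"
    print(f"{room}/{row}/{rack}: {total:.1f} W [{status}]")

for (room, row), total in sorted(row_totals.items()):
    print(f"{room}/{row}: {total:.1f} W")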
RICH MILLER
    Editor-in-Chief

Moderated by Data Center Knowledge Editor-in-Chief Rich Miller, this webinar will feature an in-depth conversation with Intel’s Jeff Klaus, Director of Data Center Manager (DCM) Solutions. The webinar will revolve around specific use cases as they apply to real data center situations. For example, in a jointly tested proof of concept (POC) conducted over a three-month period in late 2011 at Korea Telecom’s existing Mok-dong Data Centre in Seoul, South Korea, results showed that a Power Usage Effectiveness (PUE) of 1.39 would result in approximately 27 percent energy savings. This could be achieved by using a 22°C chilled water loop.

JEFF KLAUS
    Intel

The 60-minute online conversation will explain Data Center Infrastructure Management (DCIM) and its contributions to managing power and cooling usage in the data center. For example, identifying temperatures at the server level, versus at the room or even rack level, can help data center managers more accurately understand what the real ambient temperature should be for individual servers to have optimal lifespans. Register today to join Rich Miller of Data Center Knowledge and Intel’s Jeff Klaus on April 25, 2013 (2:00pm-3:00pm EDT) to learn how these types of assessments can represent significant savings in data center environment management.
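For readers unfamiliar with the metric, PUE is the ratio of total facility energy to IT equipment energy, so for a fixed IT load the relative saving from improving PUE is one minus the ratio of the new PUE to the old one. The snippet below is illustrative arithmetic only; the 1.9 baseline is an assumption, not a figure cited in the webinar or the Korea Telecom POC.

# PUE = total facility energy / IT equipment energy. For a fixed IT load,
# the relative facility-energy saving from improving PUE is:
#     savings = 1 - (pue_new / pue_old)
# Illustrative only: a PUE of 1.39 yields ~27% savings against an assumed
# baseline PUE of roughly 1.9.
pue_old, pue_new = 1.9, 1.39
savings = 1 - (pue_new / pue_old)
print(f"Estimated facility energy savings: {savings:.0%}")  # -> 27%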

    1:00p
    Intel Updates Processor Roadmap for 2013


    Intel has a busy year of new processor product rollouts planned, as it continues to update its chips to take advantage of technology innovations. During her keynote at the Intel Developer Forum today in Beijing, Diane Bryant, senior vice president and general manager of Intel’s Datacenter and Connected Systems Group, unveiled details of upcoming Intel products targeting the server, networking and storage requirements of the data center.

    The company will refresh its Intel Xeon and Atom processor lines with new generations of 22nm manufactured products. In coming months Intel will also begin production of new Intel Atom and Xeon processor E3, E5 and E7 families, featuring improved performance per watt and expanded feature sets, Bryant said.

    Intel Atom for the Data Center

    In December 2012, Intel launched the Intel Atom processor S1200 product family. Today, Intel revealed details of three new low-power SoCs (system on chip) for the data center, all coming in 2013.

• Intel Atom Processor S12x9 product family for Storage. Intel announced the availability of the low-power Intel Atom processor S12x9 family for storage deployments. This SoC shares several features with the Intel Atom S1200 processor product family, but contains technologies specifically geared for storage devices. With up to 40 lanes of integrated PCIe 2.0, or physical paths between I/O and the processor, the capacity demands of multiple devices can be handled more efficiently. Of the 40 lanes of PCIe 2.0, 24 are Root Port lanes and 16 are Non-Transparent Bridge lanes for failover support.
• Avoton. In the second half of 2013, Intel will deliver the second generation of its 64-bit Intel Atom processor for microservers, codenamed “Avoton.” Built on Intel’s leading 22nm process technology and the new “Silvermont” microarchitecture, Avoton will feature an integrated Ethernet controller and is expected to deliver significant improvements in performance per watt. Avoton is now sampling to customers, and the first systems are expected to be available in the second half of 2013.
• Rangeley. Intel will expand its presence in the network and communications infrastructure market by delivering an Intel Atom processor-based SoC codenamed “Rangeley,” also built on the 22nm process technology. Rangeley aims to provide an energy-efficient mechanism for processing communication workloads and is targeted at entry-level to mid-range routers, switches and security appliances. Rangeley is targeted to be available in the second half of 2013.

    Intel Xeon processor E3 Family

This year, Intel will introduce the new Intel Xeon processor E3 1200 v3 product family, based on the Haswell architecture. Intel continues to lower the power levels of the Intel Xeon processor E3 family; the lowest TDP will be 13 watts, up to 25 percent lower than the prior generation. The increase from eight to 10 simultaneous transcode streams enabled by Haswell’s graphics capabilities also yields up to a 25 percent improvement in transcode performance per watt for hardware-accelerated media.

    Intel Xeon processor E5 Family

Intel’s next-generation Intel Xeon processor E5 family will be based on the 22nm manufacturing process and will be available in the third quarter of this year. These processors will support Intel Node Manager and Intel Data Center Manager software. Security will also be improved with Intel Secure Key and Intel OS Guard, which provide additional hardware-enhanced security. Intel OS Guard protects against privilege-escalation attacks by preventing malicious code from executing out of application memory space, in addition to data memory.

    Intel Xeon Processor E7 Family

    To support in-memory analytics and rapidly respond to scaling data sets, Intel is on-track for production availability of the next-generation Intel Xeon processor E7 family in the fourth quarter of 2013. Featuring triple the memory capacity – up to 12 Terabytes (TB) in an eight-socket node – this processor is ideal for data-demanding, transaction-intensive workloads such as in-memory databases and real-time business analytics.

• With the Intel Xeon processor E7 family, Intel is also announcing Intel Run Sure Technology, which will deliver greater system reliability and increased data integrity while minimizing downtime for businesses running mission-critical workloads. These RAS features will be available with the next-generation Intel Xeon processor E7 family and will comprise Resilient System Technologies and Resilient Memory Technologies.
• Resilient System Technologies includes standardized technologies integrating the processor, firmware and software layers, including the OS, hypervisors and databases, to allow the system to recover from previously fatal errors.
    • Resilient Memory Technologies includes features to help ensure data integrity and enable systems to keep running reliably over a longer period of time, reducing the need for immediate service calls.
    2:30p
    Adapt Your Technology, Services and Organization to Cloud

Let’s face it – the cloud is here and cloud computing is only going to continue to evolve. Many organizations are either looking at some type of cloud solution or have already jumped in. The reality is that the cloud can be a very powerful platform that can bring numerous benefits to your organization. The key to creating that powerful cloud environment is deploying the right model, involving the right technologies, and having the right business case.

Doing this alone, you may face various challenges around best practices, services, and general knowledge of the cloud. This is where cloud innovators can offer some help. HP’s Converged Cloud Workshop helps you gain clarity on your cloud strategy, identify the cloud initiatives that can work for your business, and create a roadmap that defines your steps forward. It is an exploration and expansion of your understanding of the cloud computing model and how it fits in with, and changes the dynamics of, traditional IT solution sourcing.


    [Image source: HP - Adapt your technology, services, and organization to cloud]

In this white paper, you learn not only about the evolution of the cloud but also how a strategic cloud workshop can set your organization on the right path. The workshop involves discussing a range of cloud computing concepts with customers, followed by questions about where they stand today and where their future cloud computing plans will take them. The Converged Cloud Workshop works through several cloud elements, including:

    • Applications
    • Facilities
    • Infrastructure
    • Security
    • Reliability

    Download HP’s white paper today to see how the Converged Cloud Workshop can help your organization integrate the cloud directly with your business and growth plans. In working with cloud computing, it’s always important to involve partners who can help guide the way. By partnering with HP and using their workshop, an organization can better understand the direct impact and fit that cloud computing can have.

    2:32p
    NBC Sports To Use Windows Azure Media Services

    At this year’s National Association of Broadcasters (NAB) show, Microsoft (MSFT) announced that it is partnering with NBC Sports Group to use Windows Azure Media Services across NBC Sports’ digital platforms. Through the agreement, which rolls out this summer, Microsoft will provide both live-streaming and on-demand viewing services for more than 5,000 hours of games and events on devices, such as smartphones, tablets and PCs.

At the 2012 NAB event, Microsoft announced its Media Services cloud platform, along with a Broadcast Reference Architecture.

“Microsoft is constantly looking for innovative ways to utilize the power of the cloud, and we see Windows Azure Media Services as a high-demand offering,” said Scott Guthrie, corporate vice president at Microsoft. “As consumer demand for viewing media online — on any available device — grows, our partnership with NBC Sports Group gives us the opportunity to provide the best of cloud technology and bring world-class sporting events to audiences when and where they want them.”

    Microsoft is working with iStreamPlanet Co. and its live video workflow management product Aventus to integrate with Windows Azure Media Services, and provide a scalable, reliable, live video workflow solution. This will enable NBC Sports Group to bring its portfolio of properties to the cloud. These properties include the Sochi 2014 Winter Olympic Games, “Sunday Night Football,” Notre Dame Football, Premier League soccer, Major League Soccer, Formula One and IndyCar racing, PGA TOUR, U.S. Open golf, French Open tennis, Triple Crown horse racing, and more.

    “NBC Sports Group is thrilled to be working with Microsoft,” said Rick Cordella, senior vice president and general manager of digital media at NBC Sports Group. “More and more of our audience is viewing our programming on Internet-enabled devices, so quality of service is important. Also, our programming reaches a national audience and needs to be available under challenging network conditions. We chose Microsoft because of its reputation for delivering an end-to-end experience that allows for seamless, high-quality video for both live and video-on-demand streaming.”

    4:00p
    Managing the Data Center – One Rack at a Time

The modern data center is a combination of technologies all working together to help deliver data. As cloud computing, IT consumerization and big data continue to shape the industry, the data center will remain at the heart of it all. In working with today’s data center infrastructure, administrators must build not only for agility but for efficiency as well. The idea is to simplify data center management by breaking a complex environment into more manageable pieces – the racks. In working with rack technologies, there need to be tools in place that understand space, power and cooling. These tools can assist the data center manager in areas such as asset management, real-time monitoring, capacity planning, and process management.

    In simplifying the data center into rack components, administrators are able to better gather and quantify metrics within their infrastructure. For example:

    • Rack – total rack power
    • Rack PDU – power at the rack PDU
    • Device – power consumed by the IT device

Furthermore, there are direct benefits in managing inventory within the rack environment. At the rack level, there are three primary resources which, as the white paper outlines, must be considered when determining whether the rack can support a new asset (a simple version of these checks is sketched after the list):

    • Is there enough contiguous space to house the asset?
    • Is there sufficient redundant power for the asset?
    • Is there enough cooling to remove the heat generated by the asset?
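To make the three checks concrete, here is a minimal sketch of the kind of headroom test such a tool might run before placing an asset. It is not taken from the No Limits Software white paper; the structures and values are illustrative.

# A minimal sketch of the three rack-level checks described above:
# contiguous space, redundant power headroom, and cooling headroom.
from dataclasses import dataclass

@dataclass
class Rack:
    free_contiguous_u: int             # largest contiguous block of free rack units
    redundant_power_headroom_w: float  # spare capacity on the redundant feed
    cooling_headroom_w: float          # additional heat the rack cooling can remove

@dataclass
class Asset:
    height_u: int
    power_draw_w: float
    heat_output_w: float               # typically close to the power drawn

def can_place(rack: Rack, asset: Asset) -> bool:
    """Return True only if all three rack-level checks pass."""
    return (
        rack.free_contiguous_u >= asset.height_u
        and rack.redundant_power_headroom_w >= asset.power_draw_w
        and rack.cooling_headroom_w >= asset.heat_output_w
    )

rack = Rack(free_contiguous_u=4, redundant_power_headroom_w=800.0, cooling_headroom_w=900.0)
new_server = Asset(height_u=2, power_draw_w=550.0, heat_output_w=550.0)
print(can_place(rack, new_server))  # True: space, power, and cooling all fit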

    By using intelligent monitoring tools, not only are you able to answer questions around space, power and cooling – you’re also creating a smarter rack. With connections to existing rack management hardware, the intelligent cabinet can provide the following advanced functionality:

    • Asset management – location of all rack assets down to the rack unit
    • Power management – rack PDU and overall rack power management
    • Rack security – access monitoring and lock control via badge or keypad
    • Environmental monitoring – temperature, humidity, air flow, and other sensors
    • KVM – access to rack devices through a KVM switch
    • Touch screen and keyboard at the front of the rack

    As the data center continues to evolve – there will be a greater need to work with intelligent tools that can analyze the complex relationships between space, power and cooling. Download this white paper from No Limits Software to learn how smart software tools can create smarter data centers.

    4:00p
    NREL To Use Hot Water Cooling From Asetek

    Photo shows a server tray using Asetek’s Rack CDU Liquid Cooling system. The piping system connects to a cooling distribution unit. (Source: Asetek)

The U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) looks to join the Department of Defense in utilizing “hot water” liquid cooling as a retrofit for its Skynet HPC cluster. Asetek announced that NREL will install its RackCDU (Rack Coolant Distribution Unit) direct-to-chip liquid cooling system as the cluster is relocated to the new data center at the Energy Systems Integration Facility (ESIF) in Golden, Colorado.

Last year, NREL studied the energy efficiency performance, savings, lifecycle cost, and environmental benefits of RackCDU for potential broader adoption across the DoD. At the ESIF data center, warm water (75 degrees Fahrenheit) liquid cooling will be used to operate servers and to recover waste heat for use as the primary heat source for the building’s office space and laboratories. The higher liquid temperatures used by Asetek’s RackCDU (105°F) will improve waste-heat recovery and reduce water consumption for the data center.

    By retrofitting an existing air-cooled HPC cluster with RackCDU, NREL will reduce the cooling energy required to operate this system, reduce water usage in the cooling system and increase the server density within the cluster, reducing floor-space and rack infrastructure requirements. The system will be installed as a drop-in retrofit to existing air-cooled servers and racks.

    “Ambient water temperature in the hydronic system is a critical factor in data center efficiency and sustainability,” said Steve Hammond, director of the Computational Science Center at NREL.  “Starting with warmer water on the inlet side can create an opportunity for enhanced waste-heat recovery and reduced water consumption, and in many locations can be accomplished without the need for active chilling or evaporative cooling, which could lead to dramatically reduced cooling costs.”

The new Energy Systems Integration Facility is located on NREL’s campus in Golden. The data center is set to complete construction this summer.

    4:17p
    Google Invests $390 Million to Expand Belgium Facility

    View of sunset over the exterior of Google’s data center in St. Ghislain, Belgium. (Photo: Google)

Google continues to make big infrastructure investments, in this case in a key facility powering European services. The company is investing 300 million euros (about $390 million) to expand its data center in Belgium. It’s the latest in a series of expansion announcements for Google, which sees its data centers as the technology engine powering its online search and advertising platform.

In January, we noted the company had poured $1 billion into its data centers in a period of three months. That spending follows other major investments in multiple data centers, including an additional $600 million in North Carolina, bringing Google’s total investment there to over $1.2 billion. Last year, the company’s investment in Iowa passed the $1 billion mark. The year before, there was a $600 million expansion in Oklahoma. Google also recently unveiled its first data center project in South America, which will be located in Quilicura, Chile.

The Belgian facility in St. Ghislain, southwest of Brussels, underpins Google’s European services, including search, Gmail, and YouTube. The center currently has approximately 120 employees, and the facility is touted as highly energy efficient. Google also operates data centers catering to the European market in Ireland and Finland. The Hamina data center in Finland received $184 million in investment last year.

    Hallmark of Belgium Data Center is Efficiency

The climate in Belgium supports free cooling almost year-round, and the facility is chiller-less. The facility is “water self-sufficient,” as it draws water from a nearby industrial canal and has built a 20,000-square-foot water treatment plant to prepare the canal water for use in the data center. This is among the reasons why the facility is a top performer when it comes to energy efficiency, hitting a Power Usage Effectiveness (PUE) of 1.11 over a 12-month average in 2011. For more details on how Google runs without chillers, see Google’s Chiller-Less Data Center for our coverage of the engineering prowess behind the concept.

Google also allows the ambient temperature in the data halls of its Belgium facility to rise, with the humans working there staying within climate-controlled sections of the building for the most part. For the majority of the year, it’s cool enough that this design works without problems, but when it heats up in Belgium, the company enters “excursion hours.” Indoor temperatures can rise above 95 degrees, and the humans leave the server area. This rarely occurs, and the machines work just fine – it’s merely uncomfortable for humans.

    6:43p
    Dell Create Cloud Service Aims at Media Crowd

    The 2013 National Association of Broadcasters show in Las Vegas this week brought news from Dell, Avere and Arista Networks.

‘Dell Create’ Cloud Service Will Cater to Media & Entertainment Clients. Dell announced Dell Create, a multi-vendor cloud service for content creators, designed to help large broadcast companies, studios, creative shops and other media and entertainment customers dramatically improve their content workflows with a centralized IT environment. The Dell Create process includes understanding the customer environment, recommending the best workflow, and then proposing an up-to-date networked infrastructure based on customer direction. “Dell Create is based on direct customer feedback and the pain points those customers experience in the workflow process,” said Chad Andrews, Dell Media & Entertainment vertical strategist. “Dell Create offers customers a COMPASS (Collaborative Multi-vendor Platform-as-a-Service) computing model that enables customers to benefit from an ecosystem of best-of-breed vendors that share a pool of common technical resources, dramatically reducing costs and eliminating redundancy.” At NAB 2013, Dell is showcasing its portion of the StudioXperience Sponsored by Intel, where a number of media technology providers have embraced the Dell Create vision.

    Avere unveils hybrid storage. Avere Systems unveiled its next generation FXT 3800 hybrid Edge filer, which contains both Flash/Solid State Drive (SSD) media and Serial Attached SCSI hard drives (SAS HDD) and delivers significant performance gains in benchmark testing. With this new hybrid technology, Avere can now automatically tier data across four media types: RAM, SSD, SAS and SATA HDDs, delivering maximum performance for the hottest files, while at the same time moving cold data out of the performance tier and onto SATA to minimize costs and shrink the data storage footprint. “With the new FXT 3800, Avere continues to be on the cutting edge of file system storage innovation and gives companies a new way to think about the way they purchase data storage,” said Benjamin Woo, analyst with Neuralytix. “Customers can now receive the greatest amount of flexibility and choice by leveraging all four media tiers of storage, while defining the performance and efficiency requirements based on the activity of the data.” The Avere FXT 3800 Edge filer contains 144GB of DRAM, 2GB NVRAM and 800GB of SSD to accelerate the read, write and metadata performance of most active data.
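As a rough illustration of what hotness-based tiering means in practice (this is not Avere’s actual algorithm; the thresholds and file names below are made up), a filer might map access frequency to a media tier like this:

# An illustrative sketch of tiering data across four media types by access
# frequency: hot data lands on the fastest tier, cold data sinks to SATA.
TIERS = ["RAM", "SSD", "SAS", "SATA"]  # fastest to slowest/cheapest

def assign_tier(accesses_last_hour: int) -> str:
    # Illustrative thresholds; a real filer would adapt these continuously.
    if accesses_last_hour >= 1000:
        return "RAM"
    if accesses_last_hour >= 100:
        return "SSD"
    if accesses_last_hour >= 10:
        return "SAS"
    return "SATA"

workload = {"render_frames.exr": 2500, "project_index.db": 150, "archive_2011.tar": 0}
for path, accesses in workload.items():
    print(f"{path}: {assign_tier(accesses)}")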

Arista selected by EditShare. Arista Networks announced that EditShare, Inc. will incorporate Arista 7050T-52, 10GBASE-T, low-latency switches for its networked shared storage architecture and collaborative editing solutions. “With 10GbE it’s like night and day – workstations connecting to EditShare architectures through the Arista 1G/10GBASE-T switches transfer files in a snap,” said Andy Liebman, CEO and founder of EditShare. The 7050T-52 Series switches provide 48 ports of 100/1000/10GBASE-T to server and storage nodes and will support existing connections to 1GbE workstations as well as those migrating to 10GbE.

    7:30p
    Intel Continues to Rethink the Rack of the Future


    Intel is continuing to advance its vision for new data center designs that rethink the traditional placement of components within the server and rack. After unveiling a prototype at the Open Compute Summit in January, Intel today offered details on a similar initiative by China’s largest Internet companies, and promised to release reference designs to help OEMs and end users deploy these designs.

Intel executive Diane Bryant today shared an overview of the company’s vision for “rack scale architecture” that would break apart elements of mass-market servers. Bryant, senior vice president and general manager of Intel’s Datacenter and Connected Systems Group, spoke at the Intel Developer Forum (IDF) in Beijing, China.

An early example of this vision is the Project Scorpio rack, jointly developed by Tencent, Baidu, Alibaba and China Telecom in collaboration with technical advisors from Intel. In the Scorpio rack, the fans and power supplies are shifted from individual servers to rack-level “power zones.”

    Not Unique, But A Step on the Journey

    That in itself is not a unique concept, as there have been a number of designs that have shifted fans and power to the rack level, including various blade chassis, the CloudRack from SGI/Rackable and the Open Compute designs. But Intel sees Scorpio and similar efforts as the first step toward a larger rethinking of the server rack.

A prototype of the next phase was on display at the Open Compute Summit in the form of a photonic rack featuring technology from Intel, Corning, Quanta Computer and Facebook. The rack uses high-speed optical network connections to “disaggregate” the server, taking components that previously needed to be bound to the same motherboard and spreading them out within a rack.

The Intel/Quanta rack separates components into their own server trays – one tray for Xeon CPUs, another for Intel’s latest Atom CPUs, and another for storage. When a new generation of CPUs is available, users can swap out the CPU tray rather than waiting for an entire new server and motherboard design. The design is enabled by silicon photonics, which is at the heart of Intel’s interest in next-generation rack design.

    Moving Beyond Proximity

    Silicon photonics uses light (photons) to move huge amounts of data at very high speeds over a thin optical fiber rather than using electrical signals over a copper cable. Intel’s prototype can move data at up to 100 gigabits per second (Gbps), a speed that allows components to work together even when they’re not in close proximity to one another.
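To put that bandwidth in perspective, here is a back-of-the-envelope calculation (illustrative only, not an Intel benchmark) of how quickly a single 100 Gbps link moves data between disaggregated components:

# Back-of-the-envelope arithmetic: a 100 Gbps link carries 12.5 GB per
# second, so components a few meters apart can still exchange large
# amounts of data quickly. The 50 GB payload is an illustrative size.
link_gbps = 100
bytes_per_second = link_gbps * 1e9 / 8          # 12.5 GB/s
payload_gb = 50                                  # illustrative transfer size
seconds = payload_gb * 1e9 / bytes_per_second
print(f"{payload_gb} GB over a {link_gbps} Gbps link: {seconds:.1f} s")  # 4.0 s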

Intel’s strategy for evangelizing new rack designs involves participating in collaborative efforts like Open Compute and Scorpio, but also putting forward its own vision for how silicon photonics can revolutionize rack design (see Meet the Future of Data Center Rack Technologies by Intel’s Raejeanne Skillern for more details). The next phase is a “modular refresh” of components, which is of particular interest to Facebook because it would allow CPU trays to be upgraded independently of the rest of the rack.

    At IDF Beijing, Intel articulated this vision of how rack scale architecture will change how servers are built and refreshed. “Ultimately, the industry will move to subsystem disaggregation where processing, memory and I/O will be completely separated into modular subsystems, making it possible to easily upgrade these subsystems rather than doing a complete system upgrade.”

    Separating the processor refresh cycle from other server components would create some interesting possibilities for Intel, which currently works closely with OEMs to coordinate the inclusion of new chips in new server releases. Facebook hardware guru Frank Frankovsky has said the ability to easily swap out processors could transform the way chips are procured at scale, perhaps shifting to a subscription model.

For now, Intel is developing a reference design that utilizes Intel technologies – including Xeon and Atom SoCs, Intel Ethernet switch technology and silicon photonics – that can be used by OEM providers to develop and deliver racks. The company has already developed a design guide for the Open Compute Project featuring an intra-rack optical interconnect scheme that utilizes a New Photonic Connector (NPC).

Here’s a slide from Intel’s IDF Beijing presentation that outlines its vision for rack scale architecture:

[Slide: Intel’s vision for rack scale architecture, IDF Beijing 2013]

