Data Center Knowledge | News and analysis for the data center industry
Wednesday, February 20th, 2013
| 1:30p |
Meet the Future of Data Center Rack Technologies
Raejeanne Skillern is Intel’s director of marketing for cloud computing. Follow her on Twitter @RaejeanneS.
The Open Compute Summit just keeps getting bigger and better. By the numbers, the two-day event held in Santa Clara in mid-January this year drew three times the crowd of the 2012 gathering – amounting to more than 1,500 attendees! I could barely get a hotel room in the area due to the large number of people coming in for this event.
The summit is a meeting place for the people and organizations that support the Open Compute Project, an initiative announced by Facebook in April 2011 to openly share data center designs across the industry. And with the growth of this summit, it was clear that end users and vendors alike are getting involved and sharing ideas to make this a reality.
At the event, Intel (a founding member of the Open Compute Project) announced our collaboration with Facebook where we are defining next-generation rack technologies and how we will enable these technologies through Open Compute. As part of this collaboration, our two companies unveiled a mechanical prototype, built by Quanta Computer, that includes Intel’s new and innovative photonic rack architecture. This prototype showed the cost, design and reliability improvement potential of a disaggregated rack environment using Intel processors and SoCs, distributed switching with Intel switch silicon, and interconnects based on Intel silicon photonics technologies (green cables in photo below).
 This rack prototype was unveiled at Open Compute Summit. Intel’s photonic rack architecture, and the underlying Intel silicon photonics technologies, will be used for interconnecting the various computing resources within the rack. (Photo by Intel.)
That’s the big picture—and the big news. Let’s now drill down into some of the all-important details that shed light on what this announcement means for the future of data center rack technologies.
What is Rack Disaggregation and Why is It Important?
Rack disaggregation refers to the separation of the resources that currently exist in a rack, including compute, storage, networking and power distribution, into discrete modules. Traditionally, each server within a rack has its own set of these resources. When disaggregated, resource types can be grouped together, distributed throughout the rack, and upgraded on their own cadence without being coupled to the others. This extends the useful life of each resource and enables IT managers to replace individual resources instead of the entire system. The increased serviceability and flexibility improves total cost of ownership for infrastructure investments as well as resiliency. There are also thermal-efficiency opportunities, since components can be placed more optimally within the rack.
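For readers who think in code, here is a minimal, purely illustrative Python sketch – not based on any Intel or Open Compute specification – contrasting a traditional rack of self-contained servers with a disaggregated rack of shared resource pools that can each be upgraded on its own cadence.

```python
# Illustrative model only: class and field names are hypothetical and not
# taken from any Intel or Open Compute specification.
from dataclasses import dataclass, field


@dataclass
class TraditionalServer:
    """Each server bundles its own compute, storage and network resources."""
    compute_gen: int = 1
    storage_gen: int = 1
    network_gen: int = 1

    def upgrade_network(self) -> "TraditionalServer":
        # Upgrading one resource type means replacing the whole server.
        return TraditionalServer(self.compute_gen, self.storage_gen,
                                 self.network_gen + 1)


@dataclass
class DisaggregatedRack:
    """Resources are pooled at rack level and upgraded independently."""
    pools: dict = field(default_factory=lambda: {"compute": 1,
                                                 "storage": 1,
                                                 "network": 1})

    def upgrade_pool(self, resource: str) -> None:
        # Only the named pool is swapped; the other modules keep running.
        self.pools[resource] += 1


rack = DisaggregatedRack()
rack.upgrade_pool("network")   # e.g. move to a faster switch module
print(rack.pools)              # {'compute': 1, 'storage': 1, 'network': 2}
```

The point of the toy model is simply that upgrading the network pool touches nothing else in the rack, whereas the traditional server must be replaced as a unit.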
Intel’s photonic rack architecture, and the underlying Intel silicon photonics technologies, will be used for interconnecting the various computing resources within the rack. We expect these innovations to be a key enabler of rack disaggregation.
Why Design a New Connector?
Today’s optical interconnects typically use an optical connector called MTP. The MTP connector was designed in the mid-1980s for telecommunications and was not optimized for data communications applications. It reflected the state of the art in materials, manufacturing techniques and know-how of its time. However, it includes many parts, is expensive, and is prone to contamination from dust.
The industry has seen significant changes over the last 25 years in manufacturing and materials science. Building on these advances, Intel teamed up with Corning, a leader in optical fiber and cables, to design a totally new connector that takes advantage of state-of-the-art manufacturing techniques, adds a telescoping lens feature to make dust contamination much less likely, packs up to 64 fibers into a smaller form factor, and uses fewer parts – all at lower cost.
What Specific Innovations Were Unveiled?
The mechanical prototype includes not only Intel silicon photonics technology, but also distributed input/output (I/O) using Intel Ethernet switch silicon, and supports Intel Xeon processors and the next-generation system-on-chip Intel Atom processor code-named “Avoton.”
These innovations are also aligned with Open Compute projects underway. The Avoton SoC/memory module was designed in concert with the writing of the CPU/memory “group hug” module specification that Facebook proposed to the OCP board work group at the summit. The existing OCP Windmill board specification (which supports two-socket Xeon processors) will be updated so that power and signal delivery to the board interfaces with the OCP Open Rack v1.0 specification (for power delivery through 12V bus bars) and, for networking, with a tray-level mid-plane board that holds the switch mezzanine module. Intel will also contribute a design for enabling a photonic receptacle to the Open Compute Project (OCP) and will work with Facebook, Corning, and others over time to standardize the design.
What About Other Innovations?
Intel has already delivered several innovations to the Open Compute Project and its working groups to enable future designs based on Intel Architecture. These innovations span board, system, rack, and storage technologies.
Here’s an example of how Open Compute Project investments are driving new technologies and products available on Intel Architecture.
 Motherboards, storage, racks and management technologies are all running on Intel architecture, with multiple vendors.
In particular, Intel has been working with the OCP community to finalize the Decathlete board specification for a general-purpose, large-memory-footprint, dual-CPU motherboard for enterprise adoption. We expect that in 2013 several end users will be purchasing products from OEMs (Quanta & ZT Systems today) based on Decathlete. Intel also supported Wiwynn’s design efforts using the current Intel SoC roadmap to enable Knox Cold Storage (Centerton today, Avoton in the future).
Want to Dive Even Deeper?
To learn more about silicon photonics, see Intel’s video, “How Silicon Photonics Works,” and to hear more about silicon photonics’ potential impact on the data center, see Data Center Knowledge’s story and video with Jeff Demain of Intel Labs, “Silicon Photonics: The Data Center at Light Speed.” For a look at innovations driven by all the contributors to the Open Compute Project, visit the Specs & Designs section of the Open Compute Project website.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | | 1:33p |
Interxion Continues European Expansion
Entrance of an Interxion data center.
Interxion (INXN) announced the construction of its second data centre in Stockholm (STO 2) and expansions to its Frankfurt 6 data centre (FRA 6.3) and Copenhagen 1 data centre (CPH1).
STO 2 is being completed in two phases, each providing 500 square metres of equipped space. Phase 1 will have 2MW of power, and is set to be operational in the second quarter of 2013. The Copenhagen expansion will provide 300 square meters of equipped space and is also targeted for completion in the second quarter. These two investments combined will total €17 million ($22.7 million).
“As a leader in the Scandinavian market, Interxion is expanding its capacity to meet the needs of the marketplace,” said David Ruberg, Interxion’s Chief Executive Officer. “Interxion has seen strong growth in the Stockholm market, primarily driven by our communities of interest. We have expanded our Stockholm data centre twice in the past 18 months and continue to see strong demand in Stockholm. STO 2 will provide critical equipped space to meet our customers’ expansion requirements.”
Interxion announced a €5 million ($6.7 million) expansion of its FRA 6 data centre by 600 square metres of equipped space. The expansion is scheduled to be operational in the first quarter of 2013.
“Demand for Interxion’s Frankfurt campus, the best-connected data center campus in Europe, remains strong,” said Ruberg. ”Fill rates for Frankfurt 7 have met our expectations and FRA 6.3 will provide additional equipped space to meet the demands we see in the marketplace.”
Interxion also noted that 400 square meters were opened in its London LON2 data center, 600 square meters in Madrid MAD 2 are set to open in the first quarter of this year, and the remaining 2,500 square meters in Paris PAR7 are scheduled to open by the end of the first quarter of 2013. | | 2:39p |
Amazon OpsWorks: Empowering and Disrupting 
This week Amazon Web Services got the attention of the cloud computing community with its announcement of OpsWorks, which provides new configuration and automation features for applications housed on AWS. ”With AWS OpsWorks, you can deploy your applications to 1,000s of Amazon EC2 instances with the same effort as a single instance,” the company notes. OpsWorks is based on technology AWS acquired last year from Peritor, the creators of Scalarium.
OpsWorks is free and allows AWS customers to use Chef recipes to make system-level configuration changes and to install tools, utilities, libraries, and application code on the EC2 instances within an application. While providing a powerful new tool for developers, Amazon’s introduction of OpsWorks has also left many observers wondering how it will impact companies that offer configuration and management tools for AWS.
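For a feel of the workflow, here is a hedged sketch using the boto3 AWS SDK for Python (a later SDK than was available at the time of this announcement); the ARNs, names and cookbook URL are placeholders, and it simply walks the stack, layer, app and deployment sequence that OpsWorks exposes.

```python
# Hedged sketch: ARNs, names and the cookbook URL are placeholders, and this
# uses the boto3 SDK rather than the 2013-era boto library.
import boto3

ops = boto3.client("opsworks", region_name="us-east-1")

# 1. A stack groups the EC2 instances, layers and apps that OpsWorks manages.
stack = ops.create_stack(
    Name="demo-stack",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:"
                              "instance-profile/aws-opsworks-ec2-role",
    UseCustomCookbooks=True,
    CustomCookbooksSource={"Type": "git",
                           "Url": "https://github.com/example/cookbooks.git"},
)

# 2. A layer maps Chef recipes to lifecycle events (setup, deploy, ...).
layer = ops.create_layer(
    StackId=stack["StackId"],
    Type="custom",
    Name="app-servers",
    Shortname="app",
    CustomRecipes={"Setup": ["base::packages"],
                   "Deploy": ["myapp::deploy"]},
)

# 3. Register the application and trigger a deployment across the layer.
app = ops.create_app(StackId=stack["StackId"], Name="myapp", Type="other")
ops.create_deployment(StackId=stack["StackId"],
                      AppId=app["AppId"],
                      Command={"Name": "deploy"})
```

In this flow the Chef recipes named in CustomRecipes live in the Git repository configured on the stack and run at the corresponding lifecycle events on each instance.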
There has been much discussion of OpsWorks. Here’s a look at the notable analysis and commentary from around the web:
All Things Distributed – The AWS view from Amazon CTO Werner Vogels: “Application management has traditionally been complex and time consuming because developers have had to choose among different types of application management options that limited flexibility, reduced control, or required time to develop custom tooling. Designed to simplify processes across the entire application lifecycle, OpsWorks eliminates these challenges by providing an end-to-end flexible, automated solution that provides more operational control over applications.”
GigaOm Cloud – Barb Darrow writes: “The addition of OpsWorks to the AWS repertoire shows how Amazon is serious about adding higher-level and more intricate services to its stack as it hopes to lure more enterprise accounts. Those additions can be a double edged sword — they add functionality that many customers want but are getting from open-source and third-party toolsets. What’s good for AWS and some of its customers is definitely not a plus for some AWS partners.”
The Register - Jack Clark assesses OpsWorks’ impact on the AWS ecosystem: “The rollout of the technology is likely to make life uncomfortable for existing AWS partners, such as automation specialist Puppet, platform-as-a-service AppHarbor, and application management specialist Progress Software, among others. Developers now have a choice between doing it all through Amazon, or adding in another vendor’s tech – and therefore another layer of complication – to their particular cloud recipe. It also affects Amazon competitors such as Rightscale, a company whose main business involves the management and automation of public and private clouds.”
CIO.com – Where OpsWorks fits: “OpsWorks will be offered alongside existing management offerings Elastic Beanstalk and CloudFormation. While Elastic Beanstalk is specifically optimized for the most common Web applications and application middleware, OpsWorks can be used with anything from simple Web applications to highly complex applications. CloudFormation focuses on providing foundational management capabilities without prescribing a particular model for development and operations.”
Hacker News – This discussion thread is wide-ranging, but notes that OpsWorks doesn’t integrate with many other AWS services. | | 2:51p |
Cyan Unveils Compact 100G Optical Technology
Cyan announced the availability of a single‐slot 100G coherent transponder for the Z‐Series family of packet‐optical transport platforms (P‐OTPs). Delivering 100G in a single-slot module lets Cyan’s service provider and enterprise customers run the highest networking speeds in a smaller form factor.
The DTM-100G lets service providers select integrated support for short, medium, long, and extended reach C form‐factor pluggable (CFP) client interfaces. When combined with Blue Planet, Cyan’s software‐defined network (SDN) system developed for service providers and other network operators, the DTM‐100G is one of the first SDN‐based 100G solutions. Early deployments of the DTM 100G include Great Plains Communications in the US, among others.
“The ability to deliver coherent 100G transport services in such a compact form factor is unprecedented,” said Michael Hatfield, Cyan president. “It is in keeping with the philosophy that has characterized Cyan from the beginning—harnessing the latest technical innovations to help our customers scale their networks while driving down costs. Reducing slot consumption, improving optical reach flexibility, and eliminating the need for external modules yields a 100G solution that can be deployed in scale.”
The DTM-100G is compatible with existing Z-Series DWDM components and features DSP-based chromatic dispersion and polarization-mode dispersion compensation. Other features include the ability to support 100 Gigabit Ethernet and G.709 OTU4 regeneration, the ability to mix 10G and 100G channels, 1+1 optical protection using Cyan’s Optical Protection Switch module, and support for client SR10 (100 meter), LR10 (2 kilometer), LR4 (10 kilometer) and ER4 (40 kilometer) interfaces.
“The flexibility and form factor of the DTM‐100G are perfect for our network,” said John Greene, chief network engineer at Great Plains Communications. “As we build out our network, we typically do not know how far our customers will be from our points of presence. The optical reach flexibility inherent in the DTM‐100G means that we don’t have to.” | | 3:00p |
HP Strengthens BladeSystem Converged Infrastructure
HP this week announced significant enhancements to its BladeSystem c-Class portfolio. (Photo: HP)
At the 2013 HP Global Partner conference in Las Vegas this week, HP announced several new innovations in converged infrastructure. The event conversation can be followed on Twitter hashtag #HPGPC.
HP BladeSystem c7000
HP announced significant enhancements to its BladeSystem c-Class portfolio, with three new components, including the BladeSystem c7000 Platinum enclosure, a new HP ProLiant WS460c Generation 8 Server blade and major enhancements to its HP Virtual Connect product family. The new HP BladeSystem c7000 Platinum enclosure improves efficiencies, simplifies management and optimizes power while remaining compatible with previous generations of HP blade servers and interconnects. It features the new SX1018 HP Ethernet switch, with 40Gb downlinks to each blade server. It includes HP SmartMemory, a Three Rank (3R) 24GB Registered DIMM, which enables a 25 percent increase in speed over previous generations. The c7000 also features location and power discovery tools to allow customers to track server locations remotely from a central console.
HP Virtual Connect 4.0 was introduced as a network management tool to simplify connectivity, enable troubleshooting and boost network reliability. It extends comprehensive integration with existing enterprise networking environments and features real-time network flow monitoring and enhanced quality of service (QoS). A new HP ProLiant WS460c Gen8 workstation server blade can support virtualized client solutions with high-density 3-D graphics and eight GPUs per blade. Enabling four times more users per workstation, the WS460c can reduce costs by up to 60 percent per user compared to previous generations.
“Adopting a virtualized platform can be a daunting task as many organizations lack the scalable infrastructure required to grow their businesses,” said Chuck Smith, vice president and general manager, Blades to Cloud Business Unit, Industry Standard Servers and Software, HP. “The additions to the HP BladeSystem portfolio provides built in flexibility to accommodate future technology innovations. HP has elevated our blade offerings and allows customers to prepare their current infrastructure for virtualization while providing our channel partners with new revenue opportunities.”
HP StoreVirtual Storage Systems
HP announced new StoreVirtual systems based on ProLiant Gen8 technology and LeftHand OS 10. The new systems include reliability and availability enhancements optimized for virtualization projects. A new channel-only midrange storage solution was also announced that combines HP 3PAR StoreServ Storage and HP StoreOnce Backup functionality to deliver primary block-and-file storage with information protection in a single system.
New StoreVirtual 4530 and 4730 storage systems feature 10 times greater memory, four times larger cache and 10 gigabit iSCSI native connectivity on all models. With 3TB drives, the new systems deliver 50 percent more density than their predecessors. HP StoreVirtual Storage features all-inclusive software licensing, expansive enterprise-class storage features and low deployment cost.
Unified wired and wireless BYOD solution
HP announced new unified wired and wireless solutions that deliver a simple, scalable and secure network supporting bring-your-own-device (BYOD) initiatives while creating incremental revenue opportunities for partners. The offerings also enable partners to leverage the HP FlexNetwork architecture to better support their clients’ BYOD essentials with new device on-boarding and provisioning functionalities through a single management application and automated security with software-defined networking (SDN) technology, while being supported by mobility connectivity services.
“Organizations are struggling to deploy BYOD solutions within a complex, legacy infrastructure that spans two separate networks and management applications,” said Bethany Mayer, senior vice president and general manager, Networking, HP. “HP’s complete unified BYOD solution is the first to solve this issue and—combined with HP’s comprehensive training, programs and services—will create new, profitable opportunities for partners.”
To double network scalability over legacy infrastructure, the new OpenFlow-enabled HP 2920 Switch Series speeds data transfer by up to 45 percent, while increasing performance by up to 100 percent. Additionally, the new HP 830 Unified/WLAN Switch eliminates the need to purchase up to 50 percent of traditional network access devices, including separate switches and controllers, while supporting up to 1,000 wireless devices. | | 3:30p |
10 Considerations in Building a Global Data Center Strategy
It would be imprudent to oversimplify all the tangible and intangible elements that need to be fully understood and evaluated when creating a global data center initiative. Still, here are ten considerations to evaluate when building your global data center strategy. This is the fourth article in a series on Creating Data Center Strategies with Global Scale.
1) Site Selection and Risk Factors – Knowing Where to Build
Once you have selected a general geographic area, it takes a very experienced team to fully evaluate the suitability of a foreign location for building a new data center. Identifying risk factors – both the obvious ones, such as known seismic or flood zones, and the less obvious ones, such as adjacency to “invisible” but potential hazards like airports and their related flight paths – must be an essential part of the final decision.
2) Geopolitical Ownership Considerations
Beyond the basic factors related to physical and logistical resources, the political stability of the country and region should be considered. In some cases the nationality or type of organization of the owner or tenant may make it a target for local political factions.
Insurance costs, and even the ability to get coverage, may be affected by building a data center in a potentially lucrative and growing market that carries a higher risk profile than a nearby country with viable communications bandwidth into the target market.
However, be aware that in some volatile or politically restrictive countries, internet traffic is filtered, blocked and/or monitored.
3) Global Risk Issues
Given the recent and more frequent catastrophic weather-related events affecting even highly developed areas, we all need to review and perhaps re-evaluate our basic assumptions. While there is still some contention about how much global warming impacts the world, it is no longer a matter of “if.” Planning based on 100-year flood zones may no longer be considered ultra-conservative. The evaluation of any potential data center or other critical infrastructure site is not a cut-and-dried exercise. Geographic diversity for replicated or back-up sites is no longer an option; it is a necessity.
4) Extended Operation and Autonomy During a Crisis
Regarding availability and continuous operation, how much fuel should be stored locally – 24 hours, 3 days, a week? During a small, localized utility failure, 24 hours of fuel might once have been considered adequate, but given more recent events, 3-7 days offers a better safety margin. During an extended, widespread crisis, the expectation of daily refueling that operators rely on may prove difficult, if not impossible, to meet (case in point: Hurricane Katrina and “Super Storm” Sandy). In some cases, so much of the general infrastructure was damaged that even fuel availability and delivery to back-up generators became a severe problem (both for data centers and for their employees, limiting their ability to get to work). In the end, you will typically pay more for the co-lo with the greatest levels of redundancy, resources and better SLAs, but it would be imprudent to assume that nothing will ever happen to impact the operation of your own data center because you are in a “safe” area. Storing more fuel may be a small overall price to pay for the extended autonomy and could be the difference between staying operational and shutting down during a major crisis.
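As a back-of-the-envelope illustration of the sizing arithmetic (the burn rate below is a hypothetical placeholder – use your generator vendor’s full-load fuel curve and your actual critical load):

```python
# Hypothetical figures: substitute your generator vendor's full-load fuel
# curve and your facility's actual critical load.
BURN_GAL_PER_HR = 70        # full-load burn for one ~1 MW diesel genset
GENSETS_RUNNING = 2         # gensets carrying the critical load
AUTONOMY_DAYS = 7           # target runtime without refueling

fuel_needed_gal = BURN_GAL_PER_HR * GENSETS_RUNNING * 24 * AUTONOMY_DAYS
print(f"On-site storage for {AUTONOMY_DAYS} days: {fuel_needed_gal:,} gallons")
# -> On-site storage for 7 days: 23,520 gallons
```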
Also understand that these same problems could affect your communications providers, so investigate their capabilities for extended operations during a crisis. It is useless if your data center is operational but you have no viable communications network during a major event.
5) Availability and Cost of Power and Water
Of course picking a site location that is physically secure and has reliable access to power, water and communications is an important first step. Since energy is the most significant operating cost of a data center, focus your attention on the cost of power and its long term impact. Energy costs are highly location dependent and are based on local or purchased power generation costs (related to fuel types or sustainable sources such as, hydro, wind or solar), as well as any state and local taxes (or tax incentives). In the United States rates vary but are generally low compared to some foreign markets. Internationally energy costs are higher and can vary widely. It is important to check local rates and look for utility and energy incentives. Some countries are offering tax and other incentives to build data centers. Another factor is location and long term overall market demand for constrained resources such as power and water, which can ultimately limit the data center capacity.
If the site is relatively remote and needs to be newly developed, be sure to factor in the cost of bringing in new high-voltage utility services, which can be expensive and require long lead times to plan, approve and construct.
Site selection can also directly impact the facility’s energy efficiency. The relative energy efficiency of the data center facility infrastructure is measured as Power Usage Effectiveness (“PUE”), alongside the IT equipment’s use of power versus its computing performance. One of the largest uses of energy is cooling, which is location-dependent, since it is related to the ambient temperature and humidity conditions. With the rising acceptance of using outside air for “free cooling,” picking a location with a moderate climate can offer the opportunity to save a significant amount of energy cost over the long term, as well as a lower initial capital investment through a reduced need for mechanical cooling systems. For more details see part 3 of this series.
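As a quick worked example of how the metric and the climate-driven savings play out (all figures below are illustrative, not measurements from any particular facility):

```python
# PUE = total facility power / IT equipment power (all numbers illustrative).
it_load_kw = 1000.0

total_kw_mechanical_site = 1800.0    # chiller-heavy cooling in a hot, humid climate
total_kw_free_cooling_site = 1250.0  # air-side economization in a moderate climate

pue_mechanical = total_kw_mechanical_site / it_load_kw         # 1.80
pue_free_cooling = total_kw_free_cooling_site / it_load_kw     # 1.25

# Annual energy cost difference at an assumed flat $0.10/kWh utility rate.
rate_per_kwh = 0.10
hours_per_year = 8760
savings = (total_kw_mechanical_site - total_kw_free_cooling_site) \
          * hours_per_year * rate_per_kwh
print(f"PUE {pue_mechanical:.2f} vs {pue_free_cooling:.2f}; "
      f"~${savings:,.0f}/year saved at the moderate-climate site")
# -> PUE 1.80 vs 1.25; ~$481,800/year saved at the moderate-climate site
```
 | | 4:23p |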
NetApp Expands Portfolio of All-Flash Arrays
NetApp (NTAP) announced a new all-flash array and high-end flash-optimized storage systems.
EF540 Flash Array plus FlashRay architecture
NetApp announced the availability of the EF540 all-flash array for extreme performance-driven enterprise applications. Built on the SANtricity operating system, the EF540’s fault-tolerant architecture delivers 300,000 IOPS and sub-millisecond data access.
“A new class of arrays is unlocking flash’s full potential and delivering capabilities that accelerate the performance, reliability, and efficiency of enterprise data centers,” said Jeff Janukowicz, research director, Solid State Storage and Enabling Technologies at IDC. “For all-flash arrays to gain broader market adoption, it is important to look beyond the performance improvements and deliver must-have reliability, availability, and supportability features. The continued growth of NetApp flash storage systems, underscores the value of the company’s approach to managing and storing the massive amounts of data being created today.”
NetApp also previewed the architecture of its new, purpose-built FlashRay product family, which will deliver scale-out and efficiency features to maximize the benefits of all-flash arrays. The new product line will combine consistent, low-latency performance, high availability, and integrated data protection with enterprise storage efficiency features such as inline deduplication and compression. The product line will be generally available in early 2014.
Flash-optimized enterprise storage systems
NetApp introduced new high-end storage systems – the FAS/V6220, FAS6250, and FAS6290 – to address the demanding performance and capacity requirements of enterprise organizations. The FAS and V-Series platforms are designed for 99.999 percent availability and leverage clustered Data ONTAP for nondisruptive operations, even during upgrades. The new enterprise storage systems can scale to over 65PB. All models support flash offerings, which increase IOPS by more than 80 percent and reduce latency by up to 90 percent.
“Enterprises need storage and data management solutions that have the scalability for short-term efficiencies and long-term growth,” said Brendon Howe, vice president, Product and Solutions Marketing, NetApp. “NetApp’s FAS storage platforms and integrated flash portfolio are architected for mission-critical SAN and NAS environments to deliver agility at scale, nonstop operations, and ease of management. These systems have the performance and scale for virtualized and cloud environments, enabling IT administrators to make the appropriate infrastructure decisions, whether it is on-premise or in the cloud.” | | 8:44p |
NTT Communications Adds Enterprise Cloud Locations
NTT Communications’ Singapore Serangoon Data Center is one of three facilities in which the company is adding its enterprise cloud computing service offering. (Photo: NTT Communications)
NTT Communications launched its Enterprise Cloud last year, and is hoping its initial successes translate globally. New locations were announced today, as the company made its cloud available worldwide through data centers in Asia, the United States, and Europe.
NTT Communications’ Software-Defined Networking (SDN)-based Enterprise Cloud was initially launched via data centers in Japan and Hong Kong in June 2012. Today’s expansion adds Enterprise Cloud locations in Singapore, Virginia and California in the US, and England. NTT anticipates opening three more data centers in Australia, Malaysia and Thailand in March 2013.
“NTT Communications’ Enterprise Cloud is a full-layer, self-manageable virtual private cloud that is now global, and growing to incorporate virtualized networks in eight countries and nine locations by March 2013,” said Motoo Tanaka, Senior Vice President of Cloud Services at NTT Communications.
NTT noted it is seeing strong interest from global enterprises who view Enterprise Cloud as a flexible extension of their own data centers, enabling them to connect existing private networks to the cloud and gain additional cost-optimized and secure compute capacity.
“NTT Com understands the enterprise client, their struggles, goals and needs,” said Tanaka. “Being truly enterprise class is what makes NTT Com the leading partner of choice for client cloud transformation through comprehensive cloud lifecycle services, and is what has led us to develop this real-world cloud, built on a foundation of advisory, migration, operational and management services.” |