Data Center Knowledge | News and analysis for the data center industry
Thursday, February 13th, 2014
12:30p
MapR Hadoop Upgrade Spins YARN, Supports HP Vertica Analytics Platform
At the O’Reilly Strata Conference in Santa Clara this week, MapR Technologies announced the latest MapR Distribution, including Hadoop 2.2 with YARN, an early access release of the HP Vertica Analytics Platform on MapR, and a free Sandbox for Hadoop.
The new MapR release pairs YARN’s next-generation resource management with MapR’s read-write (R/W) POSIX data platform. YARN-based applications can run on a Hadoop cluster and share compute resources while also reading, writing, and updating data in the underlying distributed file system and database tables, allowing organizations to develop and deploy a broader set of Big Data Hadoop applications. Users can also run the Hadoop MapReduce 1.x and YARN schedulers on the same nodes in the cluster simultaneously, providing an easy and risk-free path for MapReduce 1.x users to upgrade to the new Hadoop scheduler. With this release, MapR now includes more than a dozen open source projects, including the Apache projects Hive, Pig, Solr, Oozie, Flume, Sqoop, HBase, and ZooKeeper.
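Because the MapR file system exposes a read-write POSIX interface (typically surfaced as an NFS mount), ordinary processes can update files that Hadoop jobs consume using plain file I/O, with no HDFS-style immutability restrictions. A minimal sketch of that idea — the real mount point (something like `/mapr/<cluster>/`) is an assumption, so a temporary directory stands in to keep the example runnable:

```python
import os
import tempfile

# In a real deployment this would be the NFS mount of the MapR file
# system (e.g. /mapr/<cluster-name>/ -- an illustrative assumption);
# a temporary directory stands in here so the sketch runs anywhere.
mount_point = tempfile.mkdtemp()

# An ordinary, non-Hadoop process appends records with POSIX writes...
log_path = os.path.join(mount_point, "events.log")
with open(log_path, "a") as f:
    f.write("2014-02-13T12:30:00,page_view,/pricing\n")

# ...and a YARN-based job (or any other reader) can consume the
# updated file directly from the shared file system.
with open(log_path) as f:
    records = [line.strip().split(",") for line in f]

print(records[0][1])  # event type of the first record
```

The point of the sketch is only the programming model: the same data path serves both conventional POSIX applications and YARN-scheduled jobs.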
“As YARN expands Hadoop use cases in the enterprise, the need for enterprise-grade dependability, interoperability and performance increases exponentially,” said Tomer Shiran, vice president, product management, MapR Technologies. “The combination of YARN and the MapR Data Platform delivers the only distribution for Hadoop in which both YARN and non-YARN distributed Big Data applications share the compute and storage resources of large-scale clusters.”
HP Vertica Analytics Platform on MapR
MapR announced the early access release of the new HP Vertica Analytics Platform on MapR, which provides 100 percent ANSI SQL-compliance, with advanced interactive analytic capabilities, and business intelligence (BI) and ETL tool support.
“HP Vertica Analytics Platform on MapR is a great example of a true Big Data architecture, where powerful analytics and SQL are tightly integrated with the full power and breadth of data in Hadoop, giving customers new insights to their business,” said Colin Mahony, VP and general manager, HP Vertica. “This combination of industry-leading platforms provides organizations with an integrated solution that increases performance and reliability with a smaller data center footprint, eliminating technology limits that often force businesses to make compromises.”
MapR also announced a free MapR Sandbox for Hadoop – a fully-configured virtual machine installation of the MapR Distribution for Apache Hadoop that enables users to begin exploring and experimenting with Hadoop in less than five minutes. The sandbox also includes several point-and-click tutorials for developers, analysts, and administrators.
“Hadoop is widely considered the ideal platform for handling Big Data, and the MapR Sandbox is about addressing the common challenge of Hadoop adoption,” said Tomer Shiran, vice president of product management, MapR Technologies. “Organizations face a shortage of Hadoop developers and data scientists, and without useful and easily-accessible training tools, productive Hadoop developers will continue to be in short supply. With the MapR Sandbox, developers have all the tools they need in a convenient and free package to get up to speed on Hadoop quickly.” | 1:30p |
Data Center Interconnection: Outlook for 100G DWDM Pizza Boxes
Mark Lutkowitz is Product Sales Strategist at PacketLight Networks, which supplies DWDM and OTN equipment for networks providing data, storage, video, and voice services, including dark fiber applications.
 MARK LUTKOWITZ
PacketLight Networks
While it will remain a 10G world well into the future, the need to link data centers at the 100G rate is gradually climbing – and the pricing on these optics is expected to come down dramatically in the near term.
While some companies are blessed with ample space in their data centers, in a good number of colocation centers, particularly in big metropolitan areas, the cost to rent and power an additional half or full rack must be factored into the total recurring monthly cost (not to mention the enterprises exhausting their current space). In addition, the fiber optic infrastructure may be exhausted, and using it more efficiently spectrally – a single 100G wavelength in place of 10 channels of 10G – can prove extremely beneficial in both the short and long terms.
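The spectral argument is simple arithmetic: on a fixed DWDM grid, one 100G wavelength occupies a single channel slot, while the same capacity delivered as 10 x 10G consumes ten. A rough sketch — the 80-channel figure is a common value for a 50 GHz C-band grid, but is an assumption here, not a PacketLight specification:

```python
# Rough capacity-per-fiber comparison on a fixed DWDM grid.
# 80 usable channels on a 50 GHz C-band grid is a common figure,
# but an assumption for illustration, not a vendor specification.
channels = 80

capacity_all_10g = channels * 10    # every slot carries a 10G wave (Gbps)
capacity_all_100g = channels * 100  # every slot carries a 100G wave (Gbps)

print(f"10G-only fiber: {capacity_all_10g} Gbps")
print(f"100G-only fiber: {capacity_all_100g} Gbps")

# Equivalently: delivering 100G as 10 x 10G consumes ten slots
# that a single 100G wavelength would free for other services.
slots_saved_per_100g = 10 - 1
print(f"Slots freed per 100G service: {slots_saved_per_100g}")
```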
Building a Layer 1 DWDM optical network between data centers provides unconstrained bandwidth in an ultra-low-latency, high-security, autonomous environment. Such a strategic, managed optical network – with the flexibility of both type- and rate-agnostic service offerings as well as network configurations – is frequently quite compelling.
Furthermore, part of the optical infrastructure resources can be sold to third-party companies with a need for bandwidth between the same facilities. A common way is allocating several wavelengths and renting them. Support of alien wavelengths could easily be a requirement in order to continue to take advantage of the existing infrastructure as a source of revenue.
Network Evolution is Near
There is a high likelihood that much of the world market will ultimately follow the lead of several Tier 1 and Tier 2 carriers in Japan. Instead of running over their existing backbone infrastructure, they are moving toward 1RU CPE solutions to facilitate 100G managed service level agreements with enterprises to interconnect data centers. With intense price pressures, the name of the game is going with the lowest-cost solution. At the same time, these companies expect such compact boxes to provide at least as much functionality as a traditional DWDM chassis – encompassing pluggable optics on the client and network sides as well as remote management and performance monitoring – and even the ability to perform as a media converter to deliver a 100G service is being requested.
Certainly, a pizza box inherently helps the critical cost element for a carrier compared with a full chassis: the operational work is easier and the deployment is faster. There have been ample demonstrations of this tendency with 10GbE and GbE services.
One of the biggest factors in keeping operations costs down in the 100G pizza-box business is the degree to which the customer can choose which features are incorporated into that single device, such as amplifiers, tunable DCMs, mux/demuxes, and optical switches (for protection). Moreover, the ability to offer both muxponder and transponder modes in the same solution can be a huge differentiator.
Seven additional key success factors include the following:
- Adapting to the trend toward longer distances for data center interconnection, as disasters in recent years have demonstrated the need for greater geographic separation. Using fewer wavelengths at higher capacity reduces the solution cost as distance increases.
- Redundant AC power supplies, as well as pluggable fan units, with front access for easy maintenance, which is quite attractive for data centers.
- Ease of maintenance and adequate support as some data center engineers are not familiar with DWDM technology – very user-friendly network management tools are vital.
- Same platform to migrate from 10G to 40G to 100G, as well as provide any mixture of 10G/40G services in the muxponder (as alluded to above).
- Product development experience with 1RU transport products, allowing the largest number of optical components to be packed in. It is about slimming down from a standard chassis to fit into a pizza box, reducing power consumption, rack space, and external fiber patches.
- Expanding the optical network’s future capacity without being bound to a particular chassis, which may limit future expansion to 400G (obviously even further out than 100G in terms of mass deployment). A stackable solution is much more robust against future limitations on power usage, slot size, and backplane design.
- Making sure that in the very unlikely event of the internal CPU failing, it is completely isolated from the traffic path. Rebooting can occur with a software download to the device.
Other important competitive advantages for 100G DWDM pizza box suppliers in penetrating the DC interconnection space can include support of 40G with QSFP+, capabilities of passing the 100G over ROADMs, protection of optics/interfaces, usage of direct detection solutions (as opposed to coherent technology), and the employment of standard optics.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

2:00p
Cray Selected for $40 Million Department of Defense Contract
The Cray XC30 supercomputer (Photo: Cray)
Cray announced that it has been awarded two supercomputing contracts totaling more than $40 million to provide the Department of Defense (DOD) High Performance Computing Modernization Program (HPCMP) with three Cray XC30 supercomputers and two Cray Sonexion storage systems. A Cray XC30 supercomputer and Sonexion storage system will be delivered to the U.S. Air Force Research Laboratory (AFRL) in Ohio, and two of each of those systems will be delivered to the Navy DOD Supercomputing Resource Center (Navy DSRC) located at the Stennis Space Center in Mississippi.
“Supercomputing is a critical enabler for the wide variety of science, technology, test, evaluation, and acquisition engineering communities that the DOD HPC Modernization Program supports,” said John West, director of the DOD’s High Performance Computing (HPC) Modernization Program. “These new systems are a key component of our strategy of making sure the DOD’s scientists and engineers have access to the most modern, capable, and usable computational tools available. We are especially pleased that the successful completion of this purchase marks the realization of the potential value of our streamlined process for large system acquisition, with benefits for both the government and our commercial partners.”
6 Petabytes of Storage
Both the Air Force and Navy selected the Cray Sonexion scale-out Lustre storage system – for a total of more than six petabytes of capacity and more than a third of a terabyte per second of storage performance across both systems. Cray’s Sonexion combines the company’s Lustre expertise with a unique design that scales from five gigabytes per second to more than a terabyte per second in a single file system. Management is simplified through an appliance design that integrates all components – software, storage, and infrastructure.
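Those two figures can be put in perspective with back-of-the-envelope arithmetic: at a third of a terabyte per second, even reading the entire six petabytes takes only hours. A quick check (decimal SI units are assumed, since vendor figures rarely distinguish TB from TiB):

```python
# Back-of-the-envelope: time to read the full capacity at peak rate.
# Decimal (SI) units are assumed throughout.
capacity_bytes = 6e15          # 6 PB across both systems
throughput_bps = 1e12 / 3      # "more than a third of a terabyte per second"

seconds = capacity_bytes / throughput_bps
print(f"Full scan of all capacity: {seconds / 3600:.1f} hours")  # ~5 hours
```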
“The DOD High Performance Computing Modernization Program shares a number of the same attributes as our company — both organizations are technology-driven, innovation-focused and committed to providing researchers and engineers with advanced supercomputing technologies for taking on important missions,” said Peter Ungaro, president and CEO of Cray. “Cray has a long, proud history with the HPCMP, and we are honored that our flagship Cray XC30 supercomputers and Sonexion storage systems will play a vital role in this important program.”
The DOD HPCMP remains focused on its mission to accelerate technology development and transition into superior defense capabilities through the strategic application of high performance computing (HPC), networking and computational expertise. The HPCMP provides the people, expertise and technologies that increase the productivity of the DOD’s Research, Development, Test and Evaluation community.

3:00p
Nimble Storage Launches New Converged Infrastructure Solution
Flash-optimized storage provider Nimble Storage announced SmartStack, a new converged infrastructure solution to address the storage performance challenges for desktop and server virtualization workloads.
The new SmartStack for Desktop and Server Virtualization delivers the high performance required for VDI and server virtualization while providing high levels of reliability and data protection through the snapshot and replication features built into Nimble’s Cache Accelerated Sequential Layout (CASL) architecture. With a pre-validated reference architecture based on Cisco Unified Computing System with VMware vSphere, VMware Horizon View, and Nimble Storage, enterprises can accelerate deployment while minimizing risk.
“Virtualized applications and VDI environments are workloads with differing characteristics. Customers continue to seek simplified approaches for deployment and management of these workloads in their infrastructures,” said Mason Uyeda, senior director of technical marketing, End-User Computing, VMware. ”Nimble Storage with VMware Horizon View™ can provide a foundation for storage planning and architectural design, with the flexibility to scale as these diverse workloads grow.”
The solution will be offered as an integrated solution through Avnet Technology Solutions in mid-March of this year.
“Our ability to provide configuration services and testing validation for SmartStack solutions is an expansion of our support for the Nimble channel ecosystem, which has already simplified the procurement process for resellers,” said Scott Look, vice president and general manager, Connected and Secured Solutions, Avnet Technology Solutions, Americas. “Leveraging our pre-integration services enables channel partners to gain a competitive edge by strengthening their offerings and speeding time to a complete solution deployment, while minimizing costs and investment risk.”

3:30p
Cyan Packet-Optical Enhancements and SDN Programmability
Looking to usher in a new era for carrier network transformation and empower 100 Gigabit Ethernet transitions, Cyan (CYNI) announced major enhancements to both its Z-Series Packet-Optical hardware and Blue Planet SDN platform. Cyan’s packet-optical transport platforms, coupled with its Blue Planet SDN and NFV orchestration platform built for network operators, deliver a simplified end-to-end architecture. Cyan is launching enhancements to the Z-Series Packet-Optical Transport Platform (P-OTP) and adding service automation features and an open API to the Blue Planet platform.
Ushering in the 100G Era
Infonetics’ latest global service provider survey confirms that major changes are underway in the carrier network. “Operators are roughly doubling their use of packet-optical transport nodes in access, aggregation, and metro core by 2015 as an alternative or supplement to routers, and they are rapidly adopting 100 Gigabit Ethernet over the next few years,” said Michael Howard, co-founder and principal analyst for carrier networks, Infonetics Research. “With 75 percent of the operators we talked to using packet-optical transport systems now or by 2016, and 100 Gigabit Ethernet growing to 31% of their 10G/40G/100G Ethernet ports purchased during 2015, it’s clear that operators around the world plan for both technologies to play major roles as they re-architect their metro networks for greater simplicity and scalability. This is an important first step in reaching their goals to automate and orchestrate service delivery.”
Facilitating the move to 100G and 100 Gigabit Ethernet in metro networks, the new Z-Series P-OTP release improves optical capacity and delivers more scalable packet aggregation and transport capabilities. New Z-Series modules will be available at the end of the first quarter: the LME-10G10, a 10-port 10G-to-100G muxponder module; the PSW-100G, a 100 GbE packet switching module; and the WSS-F2 and WSS-F4, two multi-degree ROADM modules. The ROADM modules can support up to 96 100G wavelengths on a single fiber, up from the previous 40 channels of 10G.
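The jump is larger than the channel count alone suggests, since each wavelength also carries ten times the rate. A quick per-fiber capacity check:

```python
# Per-fiber capacity before and after the ROADM upgrade.
old_gbps = 40 * 10    # 40 channels of 10G
new_gbps = 96 * 100   # 96 channels of 100G

print(f"Before: {old_gbps} Gbps, after: {new_gbps} Gbps")
print(f"Capacity multiple: {new_gbps / old_gbps:.0f}x")   # 24x
print(f"Channel-count increase: {(96 - 40) / 40:.0%}")    # 140%
```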
Service Automation and Programmability
New features in the Blue Planet platform enable carriers to simplify, manage and orchestrate multi-vendor networks to achieve end-to-end network control, service automation and service agility. Specifically, new Blue Planet features include the addition of A-to-Z provisioning of Metro Ethernet Forum (MEF) services in multi-vendor networks, a “cut-in” and “cut-out” feature that provides for non-disruptive activation of third-party devices on live networks, and new element adaptors for provisioning Accedian (MetroNID GT, GT-S, LT-S) and RAD (ETX-204A) devices.
Expanding the platform vision further, Cyan has announced its Blue Planet open API strategy, which is designed to make the network more programmable and simplify the process of adding new network services. This complements the company’s intent to publish APIs that allow customers and partners to program and write into the northbound interface of Blue Planet to integrate with OSSs and other critical internal systems.
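Cyan has not published endpoint details here, so the shape of a northbound call can only be illustrated, not documented. The sketch below builds a hypothetical JSON payload that an OSS might POST to a Blue Planet provisioning API for an MEF Ethernet service between two of the supported access devices; every URL, field name, and value is an assumption for illustration:

```python
import json

# Hypothetical northbound provisioning request. The endpoint, field
# names, and service types below are illustrative assumptions, NOT
# documented Blue Planet API details.
endpoint = "https://blueplanet.example.com/api/v1/services"

request_body = {
    "service_type": "mef_eline",        # an MEF E-Line style service
    "endpoints": [
        {"device": "accedian-metronid-gt-1", "port": "eth1"},
        {"device": "rad-etx-204a-7", "port": "eth2"},
    ],
    "bandwidth_mbps": 100,
    "activation": "cut-in",             # non-disruptive activation mode
}

payload = json.dumps(request_body, indent=2)
print(payload)  # an OSS would POST this body to `endpoint`
```

The value of such an API is exactly what the article describes: the OSS drives service creation programmatically instead of an operator provisioning each vendor’s element manager by hand.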
“With over 150 deployments of Cyan Z-Series packet-optical hardware and Blue Planet SDN software, Cyan has learned quite a bit about the network transformation efforts of network operators around the world,” said Mark Floyd, chief executive officer, Cyan. “Packet-optical scale and service orchestration are key topics as network operators simplify and automate their operations to reduce costs and revitalize their business models. In addition, the launch of our API strategy is critical for customers who want to unite existing operational and support systems with Blue Planet to automate service delivery. All of these enhancements are designed to ensure Cyan can help network operators deploy scalable and automated networks that can respond to today’s on-demand world.”

6:15p
Intel Adds KVM Gateway for Data Center Manager
A screen shot from Intel Datacenter Manager Virtual KVM Gateway, a stand-alone console version of the cross-platform keyboard-video-mouse (KVM) application. (Image: Intel)
Intel has launched the Intel Datacenter Manager Virtual KVM Gateway, a stand-alone console version of the cross-platform keyboard-video-mouse (KVM) application designed to provide IT managers with a single console to remotely diagnose and troubleshoot IT devices, regardless of vendor. The new KVM product is an extension of Intel Datacenter Manager (DCM) software, which captures real-time information on servers’ energy use and temperature and packages it in a data feed.
Intel DCM: Virtual KVM Gateway offers remote control for securely configuring and fixing compatible components such as servers, network switches and storage devices, supporting up to 50 simultaneous sessions even if the device operating system is not functioning. Intel’s solution offers cross-vendor support to address today’s heterogeneous data centers. The software is expected to be available for download as a free trial or through authorized resellers. For more, see the Intel Chip Shot blog.

9:00p
New Data Center Design Drives Efficiency Gains for DuPont Fabros
An aerial view of the huge ACC7 data center under construction in Ashburn, Va., where DuPont Fabros Technology is implementing a new design. (Photo: DuPont Fabros)
ASHBURN, Va. - Data center developer DuPont Fabros Technology has overhauled its facility design, seeking to bring the benefits of hyperscale server farms to companies leasing third-party data center space. The prototype for the new design is the massive ACC7 data center at the company’s campus in Ashburn, Virginia.
DuPont Fabros (DFT) has refined its approach to how it cools and powers its data centers, and has shifted from a raised-floor to a combination of a hard floor and a hot-aisle containment system to house customer cabinets. The end result is a data center that maintains DFT’s emphasis on reliability while delivering big improvements in energy efficiency and ease of maintenance. ACC7 will also allow customers to expand their capacity by housing server-filled containers on cement pads next to the data center.
“We recognize that in this industry, things change and evolve,” said Scott Davis, Senior Vice President of Operations for DuPont Fabros Technology (DFT). “We sat down with the design group and looked at the trends [in data center design]. We took all those trends and came up with goals. The end result is [a data center that’s] cheaper to build, requires lower maintenance, and has an industry-leading PUE. We never save at the cost of reliability or resiliency.”
The company expects annualized Power Usage Effectiveness (PUE) to be below 1.14 at 75 percent capacity, and below 1.13 at 100 percent utilization. That’s an improvement on DFT’s current design, which delivers a PUE of 1.28 at other facilities on the Ashburn campus.
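PUE is total facility energy divided by IT equipment energy, so overhead is simply PUE minus one, and the drop from 1.28 to 1.13 translates directly into overhead savings. A rough estimate for a fully loaded building (using ACC7’s 41.6 MW capacity as the IT load is an assumption):

```python
# PUE = total facility power / IT equipment power, so overhead = PUE - 1.
it_load_mw = 41.6          # ACC7's total critical capacity, assumed fully used

overhead_old = it_load_mw * (1.28 - 1)   # prior-design overhead, MW
overhead_new = it_load_mw * (1.13 - 1)   # new-design overhead, MW

print(f"Old overhead: {overhead_old:.2f} MW, new: {overhead_new:.2f} MW")
print(f"Overhead reduction: {1 - overhead_new / overhead_old:.0%}")
```

Under those assumptions the non-IT overhead falls by more than half, which is why a 0.15 change in PUE matters at this scale.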
With the new design, DuPont Fabros has gone bigger, taking advantage of economies of scale as well as a complete tech overhaul to bring efficiencies to new heights without sacrificing redundancy or resiliency. In approaching the design, Davis and the DFT team weighed three goals:
- Is it cheaper to build on a per megawatt basis?
- Does it have lower maintenance costs?
- Does it have industry leading PUE?
Building Bigger, But in Smaller Phases
The ACC7 data center is 446,000 square feet in size and has a total power capacity of a whopping 41.6 megawatts. The building includes 28 large computer rooms, with a standard critical load of 1.486 megawatts each and the ability to increase density to offer up to 2.1 megawatts each. Each data hall can accommodate approximately 378 standard cabinets.
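Dividing the critical load of a room across its cabinets gives the implied per-cabinet power budget:

```python
# Implied per-cabinet power budget in a standard ACC7 computer room.
standard_kw = 1486    # 1.486 MW standard critical load
dense_kw = 2100       # 2.1 MW high-density option
cabinets = 378

print(f"Standard: {standard_kw / cabinets:.1f} kW per cabinet")      # ~3.9 kW
print(f"High-density: {dense_kw / cabinets:.1f} kW per cabinet")     # ~5.6 kW
```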
DuPont Fabros is building out the entire shell, finishing eight computer rooms in the first phase of construction while leaving the remaining rooms as empty shell space. In the past, the company has built out as much as 18 megawatts of finished space at a time. With the new design, it will start with 11.9 megawatts of customer capacity and build out the rest in smaller phases as the building leases up.
In a base computer room there is a pair of A+B Power Distribution Unit (PDU) transformers capable of carrying a 1.5 megawatt load. The computer room air handling (CRAH) units sit along the perimeter walls.
DuPont Fabros has eliminated the raised floor in this design, with all cabling running overhead. Each room has a ceiling height of 13 feet, leaving plenty of room for overhead cabling.
This approach allows DuPont to sell space in smaller chunks. There is an option to deploy a steel mesh fence in the computer rooms to break them up, or to isolate the air handlers from the server area for security reasons. The PDUs and distribution panels sit outside the room to increase the footprint available for IT equipment.
Making Greater Use of Free Cooling
The new facility capitalizes on a greater opportunity for free cooling. The company calls its new approach a “water-side economization plant with chiller assist”: outside air cools water for the cooling system via a plate-and-frame heat exchanger, which is expected to be the primary cooling source for 75 percent of the calendar year. Chillers will kick in on days with warmer temperatures. In the company’s previous design, the chiller plant did all the work; less use of chillers and pumps leads to lower energy bills.
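The economics of “chiller assist” follow from the split of hours: if the heat exchanger covers 75 percent of the year, the mechanical chillers only run the remainder. A simple blended-energy sketch — the per-MW cooling power draws below are illustrative assumptions, not DFT figures:

```python
# Blended cooling power draw across economizer vs. chiller hours.
# The per-MW cooling draws are illustrative assumptions, not DFT data.
economizer_share = 0.75                 # "primary cooling source for 75% of the year"

economizer_kw_per_mw_it = 30            # pumps + fans + heat exchanger (assumed)
chiller_kw_per_mw_it = 200              # mechanical chiller plant (assumed)

blended_kw = (economizer_share * economizer_kw_per_mw_it
              + (1 - economizer_share) * chiller_kw_per_mw_it)

print(f"Blended cooling draw: {blended_kw:.1f} kW per MW of IT load")
print(f"vs. chiller-only: {chiller_kw_per_mw_it} kW per MW "
      f"({1 - blended_kw / chiller_kw_per_mw_it:.0%} lower)")
```

Whatever the exact numbers, the structure of the calculation explains the article’s conclusion: shifting most annual hours from chillers to the heat exchanger is what pulls the design PUE down from 1.28 toward 1.13.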