Data Center Knowledge | News and analysis for the data center industry
Monday, June 17th, 2013
Heard of the Software-Defined Data Center? Here's the Software-Defined Blog Post
Brian Reagan is vice president of product strategy and business development at Actifio. He was previously CTO of the global Business Continuity and Resiliency Services division at IBM Corporation, responsible for the technology strategy, R&D, solution engineering, and application development for all global offerings, including cloud services.
 BRIAN REAGAN
Actifio
Each calendar year can be easily associated with a “tech meme.” 2011’s Cloud gave way to 2012’s Big Data. 2013 is nearly halfway over and it’s clear that this year’s meme is “Software-Defined”—specifically in my line of work, the “Software-Defined” Data Center.
I'm not suggesting that these secular trends aren't, or weren't, valid. Nor am I saying that these are not transformational forces that will radically alter the way we conceive, design, build, and run IT for the next several decades. They've already started to have a significant impact on companies large and small.
My beef is that as each of these tech waves emerged, every tech company under the sun – and even some non-tech companies (try googling "Refrigerator Cloud" for some terrifying stories of home tech gone bad) – felt compelled to hitch their message wagons to the trend, valid or not. Sorry, I'm not buying it when a 30+ year software concern – born in the era of mid-range computers, COBOL, and loads and loads of tape – declares its solutions "cloud-ready."
Hype Helps Marketers
For the industry gorillas, it’s an easy play as they create these message waves. He who creates, must own. And so it goes. For the startups and emerging players, it can be perceived as a survival skill – adapt your product positioning to the latest industry message trend, or become irrelevant.
Software-Defined was picking up steam on its own, but truly broke through when the software-defined networking company Nicira was gobbled up by VMware. I'm waiting for the first Software-Defined Software concern to enter the market.
So, now it’s the Software-Defined Data Center. All caps are required when we’re talking about Important Technology Ideas. Messaging and hoopla aside, this is the nirvana that we’ve been talking about for the entire 23+ years I’ve been in this industry. We’ve invented service management frameworks to automate, we’ve developed orchestration stacks, we’ve eliminated friction in the provisioning of resources, we’ve created open standards for system communications. Cloud promised a lot of what we’re seeing in SDDC messaging, yet we’re still not there.
Looking Beyond the Labels
This is certainly a sophisticated topic that has far-reaching architectural implications. But, it does boil down to three simple ideas.
1. Application Centricity
Business Applications are the center of the IT universe. The development and operation of these applications is the CIO's primary concern. Boards of Directors don't concern themselves with LPARs or LUNs. They care about EPS and, by extension, the applications that drive revenue, manufacturing, customer satisfaction, etc. Perhaps a more appropriate term, then, would be "Application-Defined Data Center"?
2. SLA-Based Automation
The IT resources required to support applications – compute, network, storage, management – must be aligned to the nature and criticality of the specific business application. SLAs create the common language between IT and the business or application owner. Of course, SLAs cannot be paper documents filed away for "checkbox" compliance. Automation of these SLAs is critical in order to orchestrate the necessary resources and deliver on the agreed-upon terms of performance, availability, protection, resilience, retention, and so on.
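To make the idea concrete, here is a minimal sketch of SLA-driven automation in Python. The policy fields and the provision() translation are hypothetical illustrations of the concept, not any particular vendor's API:

```python
from dataclasses import dataclass

@dataclass
class SLA:
    """A machine-readable SLA: the common language between IT and the business."""
    app_name: str
    availability: float   # e.g. 0.9999 for "four nines"
    rpo_minutes: int      # recovery point objective: tolerable data loss
    rto_minutes: int      # recovery time objective: tolerable downtime
    retention_days: int   # how long protected copies must be kept

def provision(sla: SLA) -> dict:
    """Translate SLA terms into concrete resource decisions.

    A real orchestrator would call provisioning APIs here; this sketch
    just returns the derived plan.
    """
    plan = {"app": sla.app_name, "retention_days": sla.retention_days}
    # Higher availability targets demand redundant infrastructure.
    plan["instances"] = 2 if sla.availability >= 0.9999 else 1
    # A tight RPO forces continuous (rather than nightly) data protection.
    plan["protection"] = "continuous" if sla.rpo_minutes < 15 else "nightly"
    # A tight RTO means a warm standby rather than restore-from-backup.
    plan["recovery"] = "warm-standby" if sla.rto_minutes < 60 else "restore"
    return plan

# A mission-critical ERP system: the SLA, not a human, drives the plan.
print(provision(SLA("erp", availability=0.9999, rpo_minutes=5,
                    rto_minutes=30, retention_days=2555)))
```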
3. Heterogeneous Support
There are many products in the market today that have built an entire value proposition on the premise that a data center runs 100 percent virtualized (typically on VMware). Sure, there are some born-in-the-cloud businesses that have the luxury of a homogeneous infrastructure, but I've not met a single customer in the past several years who doesn't have at least 25 percent of their applications running on physical systems. And these are not corner-case, bought-in-the-90s-and-can't-end-of-life-yet applications. These are bread-and-butter, mission-critical systems running state-of-the-art versions of operating systems and databases. So any Software-Defined (or Application-Defined) Data Center must be able to comprehend the entire application portfolio, not pick and choose those that map to a limited interoperability matrix.
I’ve left off some old chestnuts like “scalability” and “manageability” as I believe we’ve reached a tipping point in IT – in the face of the data volumes common in even the smallest business, these are table stakes for ANY IT-oriented product.
I'm hopeful that we can move quickly past the Software-Defined hype and on to the more important discussions about the architectural inhibitors to progress – particularly those brittle, legacy approaches to core IT processes. At the end of the day, shouldn't it be the user that defines the data center, and not the software? The end user should define how and when it wants its data. Maybe in 2014 we will see a transition to discussions around the "Application-Defined Data Center." Until then, I'll see you at the Cloud-Based Coffee Machine. Maybe we can talk some Big Data while we're there.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Datapipe Opens Green HPC Cloud Node in Iceland
A look at one of the modular data halls at the Verne Global data center in Iceland. Datapipe is expanding its presence at Verne with a new HPC cloud installation. (Photo: Colt)
Managed hosting provider Datapipe has launched a very green cloud node in Iceland. The company has made Stratosphere, its high performance computing (HPC) cloud platform, available out of Verne Global's facility, which runs on 100 percent renewable energy.
Datapipe is an existing tenant, but its expansion is a reflection of increased data center activity in Iceland. Low cost, renewable energy, improved connectivity, and a location between North America and Europe add up to an enticing proposition.
"We have an enterprise customer base that demanded it," said Ed Laczynski, Senior Vice President of Cloud Strategy at Datapipe. "Iceland is a great in-between point with our U.S. and UK infrastructure, and it's great for disaster recovery."
Datapipe has chosen Verne Global to launch some unique capabilities for its cloud platform, namely guaranteed IOPS and performance for storage. The company will roll these features out globally, but is introducing them first in Iceland.
Power Availability, Cost Work for Verne
Datapipe clients have immediate access to the new Iceland node. It's available through the same portal as the company's other locations in Silicon Valley, the New York metro area, Ashburn, Virginia, London, Hong Kong, and Shanghai.
“Power availability and costs are becoming two of the leading constraints for HPC clouds and clusters,” said Jeff Monroe, CEO of Verne Global. “Together, Verne Global and Datapipe are meeting these challenges with the first truly green HPC Cloud for the European and North American markets.”
The Stratosphere HPC cloud platform is a high performance solution targeted at Big Data workloads. Typical verticals that the company attracts are manufacturing, financial services, and research and development. The platform is API driven and uses all-SSD storage with guaranteed IOPS (input/output operations per second). Stratosphere can be configured with public or private resources, with up to 32 physical core equivalents per instance, a half terabyte (TB) of RAM, and tens of thousands of IOPS per volume, all residing on a 10 Gigabit Ethernet network. Datapipe describes it as the most widely deployed Apache CloudStack environment on the market.
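To illustrate what "API driven" with guaranteed IOPS means in practice, here is a hedged sketch of how a client might request such an instance. Datapipe's actual API is not described in this article, so the endpoint and field names below are entirely hypothetical:

```python
import json
import urllib.request

# Hypothetical endpoint and fields, for illustration only; this is not
# Datapipe's actual Stratosphere API.
API_URL = "https://api.example.com/v1/instances"

spec = {
    "region": "iceland",          # the new Verne Global node
    "cores": 32,                  # up to 32 physical core equivalents
    "ram_gb": 512,                # up to half a terabyte of RAM
    "volumes": [
        {"size_gb": 1000, "guaranteed_iops": 20000},  # SSD with an IOPS floor
    ],
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(spec).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # would submit the provisioning call
```

The point of the sketch is that storage performance becomes a declared parameter of the request, like cores or RAM, rather than a side effect of whatever hardware the instance happens to land on.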
Datapipe Committed to Green Computing
Datapipe has been committed to using as much renewable energy as it can, and is finding that customers are increasingly asking for it as well. “As we grow new solutions, we’re seeing more and more green qualifications as a requirement to do business,” said Laczynski. “The kind of customers we’re talking about really do care about this; these are multinational corporations looking for sustainable solutions.”
“From our perspective, we’ve been very impressed with Datapipe’s commitment to green energy worldwide,” said Verne Global CTO Tate Cantrell.
Verne Global’s data center campus pulls from a reliable power grid made up of 100% renewable energy. Verne Global uses geothermal and hydroelectric power sources, plus free-cooling provided by Iceland’s ambient air temperature. It can offer long-term predictable power costs. The company has been doing well, recently enlisting Colt to help it expand.
All of Datapipe's data centers in the U.S. are powered by renewable energy as well, as part of the company's commitment to the environment. Recognized as a Green Power Partner by the U.S. Environmental Protection Agency (EPA) since 2010, Datapipe achieved EPA Leadership Club status in 2011 and is currently ranked #9 on the EPA Green Power Partner Top Tech & Telecom list.
China's Milky Way-2 Is World's Top Supercomputer
A look at the Milky Way-2 (Tianhe-2) supercomputer from China, the new champion in the Top500 list of the world's most powerful supercomputers.
Milky Way-2, a new supercomputer from China, has blown the lid off the semi-annual Top500 list of the most powerful supercomputers in the world. Announced Monday at the International Supercomputing Conference in Leipzig, Germany, the Chinese Milky Way-2 (also known as Tianhe-2) clocked in at an astounding 33.86 petaflop/s, almost double the performance of the November 2012 number one, Oak Ridge's Titan, which falls to number two on the June 2013 list.
The new champ's sister system, Tianhe-1A at the National Supercomputer Center in Tianjin, debuted at the number one spot on the Top500 list at 2.57 petaflop/s in November 2010, and took the tenth position with the same performance on the June 2013 list.
Developed by China's National University of Defense Technology (NUDT), the new Tianhe-2 will provide an open platform for research and education, and is set to be online by the end of the year. Jack Dongarra of Oak Ridge National Laboratory reported technical details of the TH-2 system after visiting NUDT during a recent International HPC Forum (IHPCF) in Changsha, China.
Intel Inside
There are 32,000 Intel Xeon E5-2600 v2 sockets and 48,000 Xeon Phi coprocessors, for a total of 3,120,000 cores. The TH-2 system represents the largest installation of Intel Ivy Bridge and Xeon Phi processors to date. Phi is Intel's Many Integrated Core (MIC) architecture for highly parallel workloads. Since its release last year, it has appeared in several HPC projects, such as the TACC Stampede supercomputer, which moved up a position on the June 2013 Top500 list, from number seven to number six.
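As a quick sanity check, the headline core count is consistent with the commonly reported configuration of 12-core Ivy Bridge sockets and 57-core Phi cards; those per-device counts are not stated above, so treat them as an assumption:

```python
# Back-of-the-envelope check of Tianhe-2's core count, assuming the
# commonly reported 12 cores per Ivy Bridge socket and 57 per Phi card.
xeon_cores = 32_000 * 12      # 384,000 CPU cores
phi_cores = 48_000 * 57       # 2,736,000 coprocessor cores
total_cores = xeon_cores + phi_cores
assert total_cores == 3_120_000   # matches the reported figure
print(f"{total_cores:,} cores")
```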
Tianhe-2's 16,000 nodes also boast a lot of memory: 88GB per node, for a total of 1.404 petabytes of system memory. A proprietary optoelectronic hybrid transport interconnect and a global shared parallel storage system holding 12.4 petabytes round out the specifications.
Powering such raw compute power is no easy task. Much of the challenge in the race to exascale is adequately powering and cooling such intensive loads. A greater emphasis on efficiency and total power consumption among the Top500 supercomputers has reduced loads somewhat, but there is still work to do if exascale is to be achieved. The Tianhe-1A supercomputer consumed 4 megawatts of power in 2010, and every system to hold the top spot since then has consumed even more, although several in the top rankings have managed to perform with less power. The TH-2 system has a peak power consumption under load of 17.8 megawatts; with cooling added, total power consumption is 24 megawatts. Cooling is handled by a close-coupled chilled-water system with a customized liquid cooling unit offering 80 kW of cooling capacity.
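Those numbers put Tianhe-2 at roughly 1.9 gigaflops per watt at the measured Linpack level before cooling is counted; a quick back-of-the-envelope calculation:

```python
# Rough power-efficiency math from the figures quoted above.
linpack_pflops = 33.86     # measured Linpack performance
power_mw = 17.8            # peak power under load, excluding cooling
total_mw = 24.0            # including the cooling system

gf_per_watt = linpack_pflops * 1e6 / (power_mw * 1e6)         # ~1.90 GF/W
gf_per_watt_cooled = linpack_pflops * 1e6 / (total_mw * 1e6)  # ~1.41 GF/W
print(f"{gf_per_watt:.2f} GF/W (compute only)")
print(f"{gf_per_watt_cooled:.2f} GF/W (with cooling)")
```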
The rest of the list
The Department of Energy's Titan, a Cray XK7 system, remained at number two with 17.59 petaflop/s, and Sequoia dropped to number three. Moving up one spot on the June 2013 list, the TACC Stampede supercomputer made significant improvements and clocked in at 5.168 petaflop/s. Vulcan, the fourth IBM BlueGene/Q system in the top 10, made a large jump from number 65 in November 2012 to number 8 on the June 2013 list, at 4.293 petaflop/s. Vulcan is a DOE supercomputer that debuted one year ago at number 48.
While China's Tianhe-2 vaulted to the top, the U.S. still leads the Top500 list with 252 systems; Europe has 112 and Asia 119. Eighty-eight percent of the systems use processors with six or more cores, and 67 percent use eight or more. Use of accelerator or coprocessor technology declined slightly, with 39 systems using NVIDIA chips, three using ATI Radeon, and 11 using Intel Xeon Phi.
The International Supercomputing Conference will continue this week, and the conversation can be followed on Twitter at hashtag #ISC13.
Veteran Team at 1547 Sees Value in Newer Markets
The exterior of 1547 Realty's new data center development in Orangeburg, N.Y. (Photo: 1547 Realty)
What do Orangeburg, New York; Kapolei, Hawaii; and Cheyenne, Wyoming have in common? Fifteenfortyseven Critical Systems Realty (1547) saw tremendous opportunity in each of these very different markets. It's a reflection of 1547's penchant for thinking outside the box on data center location, and for seeking tenant-driven data center opportunities in markets with a huge upside down the road.
The data center industry is a fairly tight-knit community, and 1547 arose out of the experience and ambitions of some of its titans. The company was founded by executives from the data center and financial industries.
- Managing Director Todd Raymond came from Telx, where he wore many hats, including CEO, President, COO, General Counsel, Controller, and Director. Ultimately, he wanted to focus on acquiring new sites, an expertise he honed at Telx prior to its acquisition by GI Partners.
- Co-founder Corey Welp was instrumental in assembling the team. Coming from the financial side, he has spent his career building a network of investment advisors and institutional asset managers, raising more than $4 billion for investment opportunities across various asset classes.
- Co-founders Jerry Martin and Pat Hines were at the Martin Group, which has focused on the data center space for the last decade, including multiple jobs for Telx.
The unique mix of finance and data center construction expertise positions 1547 to take very calculated risks in markets that aren’t necessarily proven, but are among the most promising.
“That’s not to say we wouldn’t look into key markets like Ashburn or Santa Clara,” said Raymond. “But we’re able to go into markets that have the necessary infrastructure in place and look at a facility and tell whether it’s a smart investment.”
1547 offers turn-key, build-to-suit, and powered-shell data centers. Turn-key facilities are move-in ready, available in multiple-kilowatt increments, and sub-divisible down to a single power distribution unit (PDU), enabling customers to add capacity as needed. The team designs, builds, and maintains its turn-key operations to ensure high availability and uptime.
The company has a ton of experience in due diligence and finance, ensuring that total cost of ownership (TCO) makes sense in markets that make sense. It works with customers from concept to completion, taking on whatever role a particular customer needs.
The company’s approach is reflected in its initial markets, which make clear that 1547 isn’t in the business of following the herd.
Orangeburg, NY
The 1547 team was involved in a site selection process and found a seemingly perfect site for a customer in Orangeburg, but it wound up being too large for that customer. The site proved too attractive a proposition to pass up, however, and became 1547's first acquisition.
Located on 32 acres at 1 Ramland Rd. in Orangeburg, New York, the property is in close proximity to, and within synchronous replication distance of, Manhattan, Connecticut, and key facilities in New Jersey. The 232,000 square foot property includes 150,000 square feet of data center space, with 6 megawatts on site and 24 megawatts provisioned. It's a sizeable play in a market the company believes will grow thanks to its proximity to major hubs.
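For context, those figures imply a respectable power density once the site is fully provisioned; a quick calculation from the numbers above:

```python
# Power density at full provisioning, using the figures quoted above.
provisioned_watts = 24_000_000   # 24 MW provisioned
data_center_sqft = 150_000       # data center space
print(f"{provisioned_watts / data_center_sqft:.0f} W per square foot")  # 160 W/sq ft
```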
Hawaii Pacific Data
Located in Kapolei, Hawaii, adjacent to the Hawaii Pacific Teleport facility, Hawaii Pacific Data was an interesting move for 1547. There isn't much data center news out of Hawaii, but 1547 recognized it as a potential key market going forward, given its position between Asia-Pacific and North America. "We spent the first couple of weeks answering requests for information (RFI) from a variety of content providers," said Raymond. The appeal of caching content for faster delivery is one of the factors that might drive this market. The developer says it's the only multi-tenant, carrier-neutral data center facility in the world with both satellite and subsea fiber backup capabilities. Kapolei, known as Honolulu's "second city," is on Oahu, the state's most populous island (the Big Island is the one with the volcanoes). The poured-concrete structure is 2 miles inland and 130 feet above sea level, placing it outside of flood and tsunami zones.
Cheyenne, Wyoming
A partnership with Green House Data led to the company’s most recent move. “This market was a customer-driven decision for us,” said Raymond. “It’s the perfect disaster recovery spot.” Data Center Knowledge recently covered the latest market for 1547.
Cheyenne has been in the data center news thanks to projects from Microsoft. State officials courting Microsoft put several attractive incentives on the table, hoping to turn Wyoming into a major destination for data centers. Its connectivity, cheap power, and tax incentives are all factors that 1547 found compelling. It turns out that Wyoming makes for a very compelling data center market in the Mountain West.
The NCAR-Wyoming Supercomputing Center, a 171,000 square foot facility housing one of the world's most powerful supercomputers, and a large EchoStar Broadcasting Corp. data center are two major facilities in the area. Cobalt opened a data center there not too long ago.
Customer-Driven Into New and Exciting Markets
An anchor customer usually brings 1547 into a market, and the company invests in a way that leaves room to grow beyond that initial entrance. So where to next? Given that 1547 likes to be an innovator in unproven markets that show enormous potential, the company isn't saying.
“At any given time, we have 5 or so potential projects,” said Raymond. “If I were a betting man, I’d say we’ll have at least two more projects by the end of the year.”
Wherever those next projects pop up, chances are they will be in locations that make you ask why there isn't a larger data center market there yet. Customer demand combined with thorough research is how 1547 uncovers the next emerging markets.
A Visual Guide to the Top 10 Supercomputers
The twice-a-year list of the Top 500 supercomputers documents the most powerful systems on the planet. Many of these supercomputers are striking not just for their processing power, but for their design and appearance as well. For a look at the top finishers in the latest Top 500 list, which was released earlier today at the ISC13 supercomputing conference in Leipzig, Germany, see our photo feature, The Top 10 Supercomputers, Illustrated.
Intel Advances Technical Computing With New Xeon Phi Products
Intel (INTC) continued to fuel its growth in technical computing Monday with two big announcements: new Xeon Phi products, and news that its Xeon E5-2600 v2 processor family and Phi coprocessors helped catapult China's Tianhe-2 into the number one position among the world's supercomputers. More than 80 percent of the most powerful supercomputers in the world currently run on Intel processors.
New Phi Coprocessor Products
The new Intel Xeon Phi coprocessor products are designed to provide what Intel refers to as neo-heterogeneity: a heterogeneous system with a single programming model. Intel expanded its current generation of Xeon Phi coprocessors with five new products, available immediately, featuring various performance options, memory capacities, power efficiencies, and form factors.
At the top end of the Phi line is the 7100 family, with 61 cores clocked at 1.23GHz, 16GB of memory, and over 1.2 teraflops of double precision performance. The Xeon Phi coprocessor 3100 family is designed for performance-per-dollar value and features 57 cores clocked at 1.1GHz and 1 teraflops of double precision performance. Intel also added another product to the Xeon Phi coprocessor 5100 family announced last year: the Xeon Phi coprocessor 5120D, optimized for high-density environments and able to attach directly to a mini-board for use in blade form factors.
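Those performance figures line up with the published core counts and clocks if one assumes the 16 double-precision flops per core per cycle delivered by Phi's 512-bit vector units (an assumption; the article quotes only the totals):

```python
# Peak DP performance = cores x clock (GHz) x flops-per-cycle.
# Phi's 512-bit vector units deliver 16 DP flops per core per cycle (assumed).
DP_FLOPS_PER_CYCLE = 16

def peak_tflops(cores: int, ghz: float) -> float:
    return cores * ghz * DP_FLOPS_PER_CYCLE / 1000.0

print(f"7100 family: {peak_tflops(61, 1.23):.2f} TF")  # ~1.20 TF, "over 1.2"
print(f"3100 family: {peak_tflops(57, 1.10):.2f} TF")  # ~1.00 TF
```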
“Intel is helping to blaze a path toward new innovation, discovery and competitiveness with its supercomputing vision and products,” said Raj Hazra, vice president and general manager of Technical Computing Group. “There is an insatiable demand for more computing power while also achieving new levels of power efficiency. With the current and future generations of Intel Xeon Phi coprocessors, Intel Xeon processors, Intel TrueScale fabrics and software, Intel is uniquely equipped to deliver a comprehensive solution for our customers without compromise.”
Knights Landing
After a successful launch of the Xeon Phi coprocessor last year, Intel will evolve the product into a second generation, code-named Knights Landing. The new chip will use a 14nm process and be offered as either a host processor (CPU) or a coprocessor. As a PCIe card-based coprocessor, Knights Landing will handle workloads offloaded from the system's Intel Xeon processors and provide an upgrade path for users of the current generation of coprocessors, much as Phi does today. As a standalone host processor, it will enable the next leap in compute density and performance per watt, handling the duties of both the primary processor and the specialized coprocessor at the same time.