Data Center Knowledge | News and analysis for the data center industry
Wednesday, January 16th, 2013
12:30p
Cray Supercomputer Powers German Weather Service

The new Cray XC30 supercomputer. (Photo: Cray Inc.)
Cray announced it has been awarded a $23 million contract to provide two Cray XC30 supercomputers and two Cray Sonexion 1600 storage systems to Germany’s National Meteorological Service — the Deutscher Wetterdienst (DWD).
The new systems will enable DWD to produce higher-resolution and more accurate global and regional weather forecasts. The two Cray Sonexion 1600 storage systems that will be deployed at DWD will have a combined capacity of more than 3 petabytes and a combined bandwidth of 72 gigabytes per second.
“DWD is one of the world’s most prestigious numerical weather prediction centers, and we’re honored to provide them with the supercomputing technologies necessary for delivering such an extensive range of important services,” said Dr. Ulla Thiel, Cray vice president, Europe. “We are looking forward to building a strong collaboration and close partnership with DWD. This contract is yet another example of how we continue to expand our presence in the meteorological community in Europe and across the globe.”
Previously code-named “Cascade,” the XC30 was introduced last November and features the Aries system interconnect, a new Dragonfly network topology that frees applications from locality constraints, and a cooling system that utilizes a transverse airflow to lower customers’ total cost of ownership. Consisting of products and services, the multi-year, multi-phase contract is valued at more than $23 million, and the systems are expected to be delivered and put into production in 2013 and 2014.
“At our national meteorological service, we are responsible for providing services for the protection of life and property in the form of weather and climate information,” said Dr. Gerhard Adrian, President of DWD. “This is the core task of the DWD, and thus it is imperative that we equip our researchers and scientists with scalable, productive, and above all, highly reliable supercomputing systems. The Cray XC30 supercomputers will be valuable resources for us, and we are pleased to be working with Cray.”

1:00p
Pets in the Clouds: CyberlinkASP Cloudifies Veterinary Records 
It’s not just your data that’s moving to the cloud. Now your pet’s records can be cloud-enabled as well.
CyberlinkASP has entered into a partnership with Animal Intelligence Software, Inc. to provide cloud-based software to manage veterinary practices. CyberlinkASP is providing turnkey virtual private cloud and Citrix-based technologies to support mission critical IT for Animal Intelligence.
Animal Intelligence Software is an electronic medical records (EMR) system that puts pet care into the cloud. Instead of setting up hardware and software at the office, a veterinarian with an internet connection can now run office records management in the cloud. Cloud computing is a boon to these types of systems, making them accessible to a wider audience and making it generally easier to manage a practice. Records are accessible on any device, so a doctor can pull out an iPad for reference. The software can also be hosted on-site, but the portability and accessibility of the cloud setup have more appeal.
“CyberlinkASP’s approach is seamless and turnkey,” said Dr. Thomas L. Driver, President and CEO of AI Software. “The ability to access our desktop and software from anywhere, any time was powerful. We conducted a rigorous review before selecting CyberlinkASP.”
Why Veterinary Records in the Cloud?
Pets are awesome and they mean a lot to their owners. I’d jump in front of a car for my French Bulldog. I’m not kidding.
Keeping records in the cloud means that these records are safe, portable and accessible, enabling better pet care overall. The same benefits of storing digital health records for people can be achieved by putting pet info in the cloud, and there aren’t nearly as many hurdles or as much red tape with pet records as there are with human medical records.
Software like AI helps startup and smaller practices keep better records, and makes it easier for those records to be accessed from multiple clinics. It helps keep pets up to date, providing automated prescription reminders, helping the doctors calculate doses, and much more.
AI Software is a doctor-centered electronic medical record (EMR) practice management system that increases the profits of most practices by approximately 30 percent by reducing missing charges alone.
Specialized Software Systems Find a Home in the Cloud
This is another case of software finding a home in the cloud, with CyberlinkASP serving as the provider that brought something useful into the cloud world.
“Our cloud-based virtual desktop solutions continue to gain acceptance across the enterprise spectrum as companies of all sizes realize the benefits, cost savings and scalability,” said Mason Cooper, Vice President of Information Technology, CyberlinkASP. “We currently support hundreds of applications and thousands of users all over the world. Animal Intelligence Software is the premier provider in their space, and we are honored to partner with them.”
CyberlinkASP is a managed services firm providing hosted virtual desktops, private cloud hosting, and security services. It operates data centers in Dallas, Chicago and London. Its clients include financial institutions, health care providers, insurance companies, cargo logistics businesses, and payment card processors, among others. The company has been around since 1999, providing hosted and cloud solutions.

1:30p
US Power Grid Has Issues with Reliability

Lisa Rhodes, Vice President of Marketing and Sales, Verne Global, which owns and operates a data centre campus in Iceland.
The threat of losing power is among the top stressors for industries such as data centers, hospitals and universities. It’s never a good moment when operators get the dreaded phone call saying power has been lost; unfortunately, it seems as though it’s starting to happen more frequently than ever.
According to CNN, more than 140 million customers rely on the three interlinked sectors of the U.S. power grid, which comprises more than 3,200 electric distribution utilities, over 10,000 generating units and tens of thousands of miles of transmission lines. With an average of nearly 500,000 people affected by U.S. power outages every day, it is safe to say that the grid is reaching its capacity and weakening, with age and declining infrastructure as the main culprits. Experts are worried, and with good reason. With the yearly cost of U.S. outages running into the billions, the unpredictability of the infrastructure, as well as the lack of physical security, has caused uncertainty among large-scale power users such as data centers. As a result, data centers are being forced to think outside the box and become innovative with alternative power sources.
In the past year alone there have been documented instances, such as Superstorm Sandy, where the power grid’s weaknesses were exposed, or where officials, such as those in Texas, stepped in to prevent a possible blackout. As a result, several industries, including data centers, were forced to limit or shut down operations, causing massive problems for customers and the public. As one of the largest energy consumers, data centers are among the first to feel the pressure of a waning grid. From loss of connectivity to generator failures and everything in between, the aging power grid is threatening to hurt the data center industry in more ways than one.
Aging Power Infrastructure
The original power grid pathways, similar to a highway system, were built in the early 20th century, and many utility companies have structures that have been running for 50 to 70 years. That age is compounding the problems of the weakening grid. When first built, the lines were adequate; as time has progressed, however, multiple areas of weakness have appeared, causing uncertainty.
At the current rate of decline, the grid’s existing capacity won’t be able to stand up to future needs without a billion-dollar price tag to make the necessary upgrades possible. Pushed harder than it has ever been pushed before, the grid can be held responsible for several blackouts in the past decade, including the infamous 2003 Northeast blackout and the more recent outages during Superstorm Sandy.
According to a report from Mary Meeker, there are more than 2.4 billion Internet users worldwide, a number expected to increase 8 percent yearly. In the United States alone, there are more than 244 million users, with a projected 3 percent yearly increase. With this rapid growth in the coming years, data center operators are under increasing pressure to ensure their facilities stay online. Customers demand the ability to access data at any given time, and campuses can’t let them down due to power failure. For example, campuses in the Tri-State area went on high alert when Sandy hit; however, nothing could help them when their servers, generators and other critical power equipment failed as a result of the utility power loss. Throughout the storm, companies lost the ability to access mission-critical documents and popular news websites went down, all of which had data center managers scrambling for solutions. Nearly two months after the storm, Ellis Island is still without power, and nearly all of its historical artifacts have been removed for safekeeping.

2:00p
Juniper Outlines Vision for ‘Major Shift’ to SDN

At its annual Global Partner Conference, Juniper Networks (JNPR) announced a comprehensive vision to transition enterprises and service providers from traditional network infrastructures to software-defined networks (SDN) and outlined its strategy to lead the SDN market.
“SDN is a major shift in the networking industry,” said Bob Muglia, executive vice president, Software Solutions Division at Juniper. “At Juniper, we think the impact of SDN will be much broader than others have suggested. It will redefine networking and create new winners and losers.”
In seeking to position itself among the winners, Juniper will focus on six principles that address key challenges that customers face with networks today. The six principles are:
- to cleanly separate networking software into management, services, control and forwarding layers
- to centralize the appropriate aspects of those layers to simplify design
- to use the cloud for elastic scale and flexible deployment
- to create a platform for network applications, services and integration into management systems
- to standardize protocols for interoperable, heterogeneous support across vendors
- to broadly apply SDN principles to all networking and network services.
Juniper defines a set of four clear steps that can enable customers to start taking advantage of the promise of an SDN-enabled network.
Step one is to centralize network management, analytics and configuration functionality to provide a single master that configures all networking devices. The second step is to create service virtual machines that extract networking and security services from the underlying hardware. The third step introduces “SDN Service Chaining,” in which software virtually inserts services into the flow of network traffic and a centralized controller connects multiple network and security services in series across devices within the network (a rough sketch of the idea follows below). The fourth step is to optimize the use of network and security hardware to deliver high performance.
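To make the service-chaining idea more concrete, here is a minimal, hypothetical sketch in Python. It is not Juniper’s code or API; the class and function names are invented for illustration, and the “controller” here simply applies an ordered list of software services (a toy firewall and a traffic monitor) to each packet.

```python
# A toy, hypothetical model of SDN service chaining: a centralized controller
# applies an ordered list of software "services" to traffic. Not Juniper code;
# all names here are invented for illustration.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Packet:
    src: str
    dst: str
    payload: str
    tags: List[str] = field(default_factory=list)


def firewall(pkt: Packet) -> Packet:
    # Pretend 10.x.x.x sources are trusted; tag the verdict instead of dropping.
    pkt.tags.append("fw:allowed" if pkt.src.startswith("10.") else "fw:blocked")
    return pkt


def monitor(pkt: Packet) -> Packet:
    # Stand-in for a traffic-monitoring or analytics service.
    pkt.tags.append("monitored")
    return pkt


class ChainController:
    """Central controller that composes services into an ordered chain."""

    def __init__(self) -> None:
        self.chain: List[Callable[[Packet], Packet]] = []

    def insert(self, service: Callable[[Packet], Packet]) -> None:
        self.chain.append(service)

    def process(self, pkt: Packet) -> Packet:
        for service in self.chain:
            pkt = service(pkt)
        return pkt


if __name__ == "__main__":
    controller = ChainController()
    controller.insert(firewall)
    controller.insert(monitor)
    result = controller.process(Packet("10.0.0.5", "192.168.1.9", "hello"))
    print(result.tags)  # ['fw:allowed', 'monitored']
```

The point the sketch illustrates is that the chain lives in software at a central controller, so services can be inserted, reordered or removed without touching the underlying forwarding hardware.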
While SDN steps one through three enable new network and security capabilities, optimized network and security hardware will continue to deliver performance for critical networking functions that is 10 times or better than what can be accomplished in software alone.
“We’re embracing SDN with clearly defined principles, a four-step roadmap to help customers adopt SDN within their business, and the networking industry’s first comprehensive software-centric business model,” said Muglia. “We’re incredibly excited about the value that SDN will deliver to our customers and are committed to leading the industry through this transition.” Muglia discusses Juniper’s SDN strategy in a blog post.
New Licensing Model
Juniper also announced a new software licensing and maintenance model that enables customers to exploit software value over time. It allows the transfer of software licenses between Juniper devices and industry-standard x86 servers, and allows customers to scale their purchases based on actual usage. Juniper’s Brad Brooks discusses the new Juniper Software Advantage licensing, and value creation with SDN.
“SDN is frequently discussed in narrow terms rather than holistically, with solutions focused mostly in the forwarding and data planes,” said Vernon Turner, senior vice president of research at IDC. “Juniper’s approach is one of the most comprehensive that we’ve seen to date from any networking provider — from both a technology and business model perspective.”

2:21p
Symform: The World’s Largest Virtual Data Center?

There’s more investment in the cloud backup space, and Symform wants to use some of that funding to create the world’s largest virtual data center. And parts of it may live in the office of your local real estate professionals.
Cloud backup service provider Symform said today that it has added Second Century Ventures (SCV) to the ranks of its strategic investors. SCV is the venture capital fund of the National Association of Realtors (NAR).
Symform has a decentralized take on cloud storage, providing a global cloud storage network consisting of excess space from all of its users. The company says its Global Cloud Storage Network now reaches active users in 160 countries, with Symform storing over 7 billion data fragments.
The SCV financing will power product development and accelerate adoption. “This investment is an important validation of our model, and is additional momentum as we build the world’s largest virtual datacenter,” said Matthew Schiltz, CEO of Symform.
World’s Largest Virtual Data Center?
Symform targets the mid-market and IT resellers. It’s a distributed and crowd-sourced free online storage service, where users pay with bytes instead of bucks. A business on the network contributes excess local drive space to the grid in exchange for backup. Data leaving source devices is encrypted and shredded, redundancy is added, then it is geo-distributed across the global network – hence a virtual data center.
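As a rough illustration of that encrypt, shred, add-redundancy, and distribute pipeline, here is a minimal sketch in Python. It assumes nothing about Symform’s actual algorithms: the Fernet cipher (from the third-party cryptography package) stands in for whatever encryption the service uses, a single XOR parity fragment stands in for real erasure coding, and the node names are invented.

```python
# Minimal sketch of an encrypt -> shred -> add-redundancy -> distribute
# pipeline. This is NOT Symform's actual design: Fernet (from the third-party
# "cryptography" package, pip install cryptography) stands in for the real
# cipher, a single XOR parity fragment stands in for real erasure coding, and
# the node names are invented.
from typing import Dict, List

from cryptography.fernet import Fernet


def shred(blob: bytes, fragment_size: int = 64) -> List[bytes]:
    """Split an encrypted blob into fixed-size fragments."""
    return [blob[i:i + fragment_size] for i in range(0, len(blob), fragment_size)]


def xor_parity(fragments: List[bytes]) -> bytes:
    """Toy redundancy: one XOR parity fragment covering all data fragments."""
    parity = bytearray(max(len(f) for f in fragments))
    for frag in fragments:
        for i, byte in enumerate(frag):
            parity[i] ^= byte
    return bytes(parity)


def distribute(fragments: List[bytes], nodes: List[str]) -> Dict[str, List[bytes]]:
    """Round-robin fragments across contributed storage nodes."""
    placement: Dict[str, List[bytes]] = {node: [] for node in nodes}
    for i, frag in enumerate(fragments):
        placement[nodes[i % len(nodes)]].append(frag)
    return placement


if __name__ == "__main__":
    key = Fernet.generate_key()                      # stays with the data owner
    encrypted = Fernet(key).encrypt(b"backup data " * 50)
    frags = shred(encrypted)
    frags.append(xor_parity(frags))                  # add redundancy
    layout = distribute(frags, ["node-us", "node-eu", "node-apac"])
    print({node: len(parts) for node, parts in layout.items()})
```

In a real system the redundancy step would use an erasure code, so that any sufficiently large subset of fragments can reconstruct the data; that is what allows fragments to live on unreliable, crowd-sourced nodes.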
SCV joined Symform’s $11 million Series B round of funding from 2012, which included financing from Longworth Venture Partners, OVP Venture Partners, and WestRiver Capital. The strategic partnership with SCV puts Symform’s unlimited cloud storage offerings into the hands of the member base of NAR, the largest trade association in the United States.
“Second Century Ventures is committed to identifying and helping develop technology solutions that help our Realtor members maintain a competitive edge while growing their businesses,” said Dale A. Stinton, president of SCV and NAR chief executive officer. “As the volume of electronic information continues to grow, proper data backup and protection is becoming mission-critical for all businesses everywhere. Symform is a great fit in our investment portfolio and is also tailor-made for Realtors, who have a strong need for a cost-effective cloud data backup solution.”
As part of the investment, Constance Freedman, managing director of Second Century Ventures, will join the Symform Board of Directors. As managing director at SCV, Freedman manages all aspects of the fund, from cultivating investment opportunities to helping portfolio companies achieve their strategic goals.

3:00p
Time-Lapse: A Supercomputer in 90 Seconds

This time-lapse video shows the construction of the Fujitsu Primergy high-performance cluster at the National Computational Infrastructure at the Australian National University in Canberra. The Primergy came in at No. 24 in the world on the Top500 list of the fastest supercomputers. The cluster is based on the technology developed for the ‘K’ computer in Japan, which was until recently the world’s fastest computer. (See DCK coverage of the Top500 list released in November.)
Here’s how this cluster’s stats compare to a typical PC:
- 57,000 cores = 15,000 home PCs
- 160 terabytes of RAM = 40,000 home PCs
- 10 petabytes of hard disc = 10,000 PC hard drives
- 1,200 teraflops of peak computational performance = 5 months’ worth of calculations by 1 billion people armed with calculators, performed in just 1 second (a rough check of this arithmetic follows below)
- 9 terabytes of network bandwidth = 9 million home internet connections
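A rough sanity check on the calculator comparison, under the assumption (not stated in the original) that each person completes one calculation every 10 seconds:

$$
10^{9}\ \text{people} \times \frac{1\ \text{calculation}}{10\ \text{s}} = 10^{8}\ \text{calculations/s},
\qquad
\frac{1.2\times10^{15}\ \text{operations}}{10^{8}\ \text{calculations/s}} = 1.2\times10^{7}\ \text{s} \approx 139\ \text{days} \approx 4.6\ \text{months}
$$

which lands roughly on the “5 months in one second” figure quoted above.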
Enjoy this time-lapse of the construction!
For more on supercomputing, bookmark our HPC channel. For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

4:00p
Emerson Adapts Open Compute, Eyes HyperScale Market

In the first two years of the Open Compute Project’s initiative to bring open design standards to hyperscale data centers, vendors of power and cooling products have been notable for their absence. Not so with the 2013 Open Compute Summit, which begins today in Santa Clara.
Emerson Network Power will be on hand at the summit to show off a rack solution that integrates power distribution and back-up concepts in the Open Rack specification, created with off-the-shelf components, the company said. Emerson is also launching a new consulting initiative that will target operators of large cloud environments for search and social networking.
Emerson’s presence reflects the growing impact of the Open Compute Project (OCP), which is building enough momentum that the largest vendors in the data center equipment space are paying attention. OCP’s initial focus was on server and data center design. That’s why HP, Dell, Intel and AMD participated in the project’s 2012 summit, and Digital Realty Trust and DuPont Fabros Technology are working to support Open Compute designs in their wholesale data center space.
Open Compute Focus Extends to Racks
It appears to be OCP’s introduction of the Open Rack that has captured the attention of Emerson Network Power, a leading player in data center power and cooling that sells lots of cabinets and rack containment systems.
The Open Rack provides a 21-inch wide slot for servers, expanding upon the 19-inch width that has long been the standard for data center hardware. The wider form factor will create more room for improved thermal management, as well as better connections for power and cabling. Power supplies are now separate from the server motherboards and reside in a “power shelf” at the base of the rack, where they tie into the busbar at the rear of the unit.
Scott Barbour, the business leader for Emerson Network Power Systems, said Emerson’s expertise in power distribution, cooling and infrastructure management positions the company to be a player in adapting Open Compute designs.
“The hyperscale solutions team is made up of individuals with decades of data center experience and draws upon the knowledge and capabilities of thousands of Emerson technologists and engineers to support the visions of data center professionals,” said Barbour. “And only Emerson has the global scale and resources to rapidly implement those visions.”

6:00p
AMD Rolls Out Open Compute Servers for Wall Street

Here’s a closer look at an AMD Open 3.0 server, a 3U unit built by Quanta Computer. (Photo: AMD)
If Facebook and the world’s largest financial services companies got together to build a server, what might it look like? The answer can be seen in new servers being unveiled today based on the AMD Open 3.0 platform, which was developed through the Open Compute Project.
The hardware, which AMD calls “a radical rethinking of the server motherboard,” will be on display today at the 2013 Open Compute Summit in Santa Clara, Calif. It is also being evaluated in data centers at Fidelity Investments and Goldman Sachs.
“This is big for AMD, big for customers and big for the data center,” said Bob Ogrey, Cloud Technical Evangelist for AMD. “We’re pretty excited. It’s a paradigm shift in the server space. What’s really important is that this is the first time a platform has been rolled out for Open Compute that isn’t targeted at Facebook’s data centers.”
Collaborating With the Financial Community
The Open Compute Project (OCP) was launched in April 2011 to publish data center designs developed by Facebook for its Prineville, Oregon data center. AMD Open 3.0 is an outgrowth of the first Open Compute Summit in New York in the fall of 2011, when technologists from AMD and several large New York financial institutions began discussing how to adapt the “open source hardware” designs for the financial community.
A prototype known as “Roadrunner” was demonstrated last May at the second Open Compute Summit in San Antonio. This year AMD is back with a refined platform that features multiple configurations for HPC, cloud computing and storage. AMD has built a new motherboard, which serves as the foundation for new servers built by original design manufacturers Tyan and Quanta Computer; those servers will then be marketed by systems integrators including Avnet Electronics Marketing and Penguin Computing.
“We became involved with the Open Compute Project very early as we saw a pervasive demand for simplified, energy efficient servers,” said Suresh Gopalakrishnan, corporate vice president and general manager, Server, AMD. “Our goal is to reduce data center power consumption and cost yet increase performance and flexibility; we believe that AMD Open 3.0 achieves this.”
Realizing Open Compute’s Mission
“This is a realization of the Open Compute Project’s mission of ‘hacking conventional computing infrastructure,’” said Frank Frankovsky, Chairman of the Open Compute Foundation and VP of Hardware Design and Supply Chain at Facebook. “What’s really exciting for me here is the way the Open Compute Project inspired AMD and specific consumers to collaboratively bring our ‘vanity-free’ design philosophy to a motherboard that suited their exact needs.”
AMD Open 3.0, powered by the recently announced AMD Opteron 6300 Series processors, can be installed in all standard 19-inch rack environments without modification, as well as in Open Rack environments. The AMD Open 3.0 motherboard is a 16” x 16.5” board designed to fit into 1U, 1.5U, 2U or 3U rack-height servers.
The motherboard features two AMD Opteron 6300 Series processors, each with 12 memory sockets (four channels with three DIMMs each), 6 Serial ATA (SATA) connections per board, one dual channel gigabit Ethernet NIC with integrated management, up to four PCI Express expansion slots, a mezzanine connector for custom module solutions, two serial ports and two USB ports. Specific PCI Express card support is dependent on usage case and chassis height.
“We have eagerly awaited the AMD Open 3.0 platform as it brings the benefits and spirit of the Open Compute Project to a much wider set of customers,” said Charlie Wuischpard, CEO of Penguin Computing. “As we deliver a new line of Penguin servers based on AMD Open 3.0 and AMD Opteron 6300 processors, our high performance computing, cloud, and enterprise customers can now deploy application specific systems using the same core building blocks that are cost, performance, and energy optimized and perhaps most important, consistent. We think this initiative eliminates unnecessary complexity and provides new levels of supportability and reliability to the modern data center.”
Working Together With Intel
The effort led to unusual collaborations. The AMD team worked with rival Intel on standardizing components of the design, including the mezzanine connector. “In the 15 years that I’ve been doing servers, it’s the first time we’ve gotten on the phone with the guys at Intel,” said Ogrey. “That’s pretty exciting.”
The motherboard has a “T” shape to accommodate financial services firms’ requirements for power supplies. Some place the power supply on the left, Ogrey said, while others place it on the right, and still others have dual power supplies on either side for redundancy. The motherboard is shown below.
6:30p
Calxeda Rolls Out ARM-Based Open Vault Storage

The Open Compute Project is getting some ARM-powered hardware, but perhaps not where you’d expect. Calxeda today introduced Project Knockout, which teams its low-power processors with the Open Vault storage system.
ARM processors are best known for their use in mobile devices like the iPhone and iPad. Austin-based Calxeda is adapting them for servers. With Project Knockout, it has developed a motherboard that can be installed in the Open Vault storage tray, eliminating the need for a separate server to control the disks.
Calxeda is demonstrating Project Knockout at today’s Open Compute Summit in Santa Clara, Calif. It’s one of several companies showing off ARM technology, as Applied Micro is also presenting at the summit.
“Project Knockout injects more compute in customers’ storage tier, putting efficient processing close to the data,” said Karl Freund, VP Marketing for Calxeda. “We are honored to have the Open Compute Project single out our contribution to Open Vault today at OCS.”
Calxeda is also working with Avnet on ARM-based contributions to the Open Compute Project that will leverage Avnet’s data center technologies. The co-developed solutions are expected to be available to the OCP community in the fall of 2013.
“Partners like Calxeda are critical to bringing creative new design options to the Open Compute Project community and we applaud their technical contributions to the project,” Frankovsky said.

10:17p
Open Compute: Momentum Builds for Open Hardware

At the Open Compute Summit 2013, Facebook’s hardware guru, Frank Frankovsky, displays a board showing common sockets, compliant with the Open Compute Common Slot Specification. It is affectionately known as the “Group Hug” board. Processors from AMD, Applied Micro, Calxeda and Intel can all work with this board. (Photo: Colleen Miller)
SANTA CLARA, Calif. – Perhaps the best sign of the progress made by the Open Compute Project is that companies as diverse as Rackspace Hosting, Fidelity Investments and Goldman Sachs are all running servers based on these “open hardware” designs in their data centers.
In less than two years, the Open Compute Project (OCP) has grown far beyond its origins as a showcase for Facebook’s design innovations, evolving into an active community building cutting-edge hardware, disrupting the traditional IT supply chain, and laying the groundwork for future innovation.
More than 1,900 technologists gathered today for the Open Compute Summit at the Santa Clara Convention Center, three times the attendance of the last summit in May 2012. They packed an expo floor filled with hardware – real, working hardware based on open designs. Meanwhile, the industry’s thought leaders took the stage to outline new designs that could accelerate the pace of change in the world’s largest data centers.
“The momentum is continuing to build,” said Frank Frankovsky, Chairman of the Open Compute Foundation, and VP Hardware Design & Supply Chain at Facebook. “It’s really amazing to see the turnout today.”
Open Designs, Changing Supply Chain
Hardware and data centers have historically been the stronghold of secrecy and proprietary designs. The formation of Open Compute advanced the promise of open standards for servers, storage, racks and data center design. If it seemed a pipe dream at the time of its launch, the OCP has quickly become real, with some of the U.S. tech industry’s largest players – including Intel, AMD and Facebook – rolling out new products.
It has also given a higher profile to a new supply chain featuring original design manufacturers and companies developing low-power processors. A dozen new organizations have joined OCP since the last summit, including storage companies like EMC, Fusion-io, Hitachi, and Sandisk and microprocessor organizations like Applied Micro, ARM, Calxeda, and Tilera.
Several presentations at today’s summit focused on new innovations with the potential to change the face of the data center.
“One of the challenges we face as an industry is that much of the hardware we build and consume is highly monolithic — our processors are inextricably linked to our motherboards, which are in turn linked to specific networking technology, and so on,” said Frankovsky. “To fix this, we need to break up some of these monolithic designs — to disaggregate some of the components of these technologies from each other so we can build systems that truly fit the workloads they run and whose components can be replaced or updated independently of each other.”
Several OCP initiatives illustrate this opportunity:
- The “Group Hug” Motherboard: Facebook is contributing a new common slot architecture specification for motherboards. The specification, nicknamed “Group Hug,” can be used to create vendor-neutral motherboards able to last through multiple processor generations. It uses a PCIe x8 connector to link the SoCs to the board. “Why don’t we have a common socket?” Frankovsky asked. “All the surrounding bits are the same.”
- Silicon Photonics: Intel is contributing designs for its forthcoming silicon photonics technology, which will enable 100 Gbps interconnects, enough bandwidth to serve multiple processor generations. The technology’s ultra-low latency could allow components that previously needed to be bound to the same motherboard to be spread out within a rack. “We’ll be able to do things in the data center that we’ve never been able to do before,” said Intel CTO Justin Rattner. “This is really a remarkable breakthrough,” added Andy Bechtolsheim, founder and chairman of Arista Networks and a member of the OCP board.
- New Processors and SoCs: AMD, Applied Micro, Calxeda, and Intel have all announced support for the Group Hug board, and Applied Micro and Intel have already built mechanical demos of their new designs.
Frankovsky said the hardware, adoption and innovation on display at the summit demonstrated the power of open collaboration.
“We’re building something cool that’s unique to the industry,” said Frankovsky. “This is a consumer-led organization. I think suppliers are starting to listen more. We’re designing and delivering in the open. By doing this in the open, we’re creating a virtuous cycle.”
That approach got the enthusiastic approval of keynote speaker Tim O’Reilly, a leading light of the open source community, who reviewed the history of how free and open source software had changed the face of computing.
“The important frontier for open source is the cloud,” said O’Reilly. “What you guys are engaged in is the invention of the future. I’m so proud that you’re taking the future into your own hands.”
Intel CTO Justin Rattner, who is an Intel Senior Fellow, Corporate Vice President and director of Intel Labs, and Andy Bechtolsheim, chairman of Arista Networks, discuss the potential of silicon photonics and how it can be used to speed intra-rack communications. (Photo: Colleen Miller)