Data Center Knowledge | News and analysis for the data center industry
Thursday, May 1st, 2014
12:00p |
Dublin officials green-light Google data center
Google is expanding in Ireland and has recently received municipal approval from Dublin City Council for its new €150 million (around $205 million US) data center in west Dublin. The company first announced plans to construct the new 30,361 square meter (326,803 square foot) facility last February.
The new data center is expected to be twice the size of its existing €75 million data center, which opened in Dublin in 2012, the Irish Times reported. The facility will be two stories high and sit next to the existing facility in Clondalkin, Dublin 22. The development will create up to 300 construction jobs, as well as 60 full-time jobs once operational.
Google chose Dublin for its data centers because it believes the location has the right combination of energy infrastructure, available land and workforce.
Dublin is unique amongst major European data center hubs in that its appeal is based on climate, rather than connectivity. While the thriving data center communities in London, Amsterdam and Frankfurt are built atop network intersections in key business cities, Dublin has become one of the world’s favored locations for free cooling – the use of fresh air to cool servers.
It is a prime example of how free cooling is giving rise to clusters of energy-efficient facilities in cool climates.
Google continues investing significantly in expansion worldwide, after spending $7.3 billion on its data centers in 2013.
Also in Dublin, Microsoft announced plans in December for a $230 million expansion of its existing cloud hub, where it has now invested more than $806 million (€594 million). Microsoft is several miles down the road from Google’s Dublin campus in Profile Park, where it completed a $100 million expansion in 2012. | 12:30p |
NERSC Contracts with Cray for $70 Million Supercomputer
Cray has been awarded a $70 million contract to provide a next-generation Cray XC supercomputer to the U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing Center (NERSC). The new system, named Cori after biochemist and Nobel Laureate Gerty Cori, will help support more than 5,000 scientists annually on more than 700 research projects.
NERSC is also home to a Cray XC30 supercomputer named “Edison” and a Cray XE6 supercomputer named “Hopper.” The new Cori system is expected to deliver 10 times the sustained computing capability of the Hopper supercomputer.
“We are proud of the history we’ve built with NERSC, and we are honored that the Center, along with the Department of Energy’s Office of Science, has once again turned to Cray to support the future computational needs of their large user community,” said Peter Ungaro, president and CEO of Cray. “This is a significant contract for our company as it demonstrates that our roadmap for the Cray XC family will continue to lead the industry well into the future. The researchers and scientists at NERSC are focused on solving a wide range of challenging problems that demand high levels of performance and reliability across a broad spectrum of scientific applications. We look forward to working with NERSC and putting our future technologies to the test as part of the Department of Energy’s leading-edge scientific research and discovery program.”
Under the multi-year contract, the Cray XC will be delivered in 2016, and the deal includes a 400 gigabyte-per-second, 28-petabyte Cray Lustre file system storage solution. Additionally, NERSC has the option to purchase solid-state storage integrated into the Cray XC supercomputer for extremely high-performance burst I/O.
“We are excited to continue our partnership with Cray,” said Sudip Dosanjh, Director of the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory. “Cori will provide a significant increase in capability for our users and will provide a platform for transitioning our very broad user community to many core architectures. We will collaborate with Cray to ensure that Cori meets the computational and data needs of DOE’s science community.”
With over 9,300 Knights Landing compute nodes, the Cori supercomputer will also feature an option for a “Burst Buffer,” a layer of NVRAM that would move data more quickly between processor and disk, allowing users to make the most efficient use of the system while saving energy. The Knights Landing processors will have over 60 cores, each supporting multiple hardware threads, with improved single-thread performance over the current-generation Xeon Phi coprocessor. The processors also feature on-package high-bandwidth memory that can be used either as a cache or explicitly managed by the user.
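For a rough sense of scale, the two rounded figures in the announcement (over 9,300 compute nodes, and the “over 3 teraflops” per single-socket node that Intel cites below) can be combined into a back-of-envelope peak estimate. This is only a sketch built on those rounded numbers, not a published specification:

nodes = 9_300             # Knights Landing compute nodes (announcement figure)
tflops_per_node = 3.0     # teraflops per single-socket node (Intel figure, quoted below)

peak_pflops = nodes * tflops_per_node / 1_000  # convert teraflops to petaflops
print(f"rough peak: {peak_pflops:.1f} PF")     # prints ~27.9 PF

Actual delivered performance will depend on the application mix; the announcement itself frames the system’s capability as roughly 10 times the sustained performance of Hopper.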
“We are thrilled to work with Cray in bringing the next generation of highly parallel supercomputers to market based on the Intel Xeon Phi processor – codenamed Knights Landing,” said Charles Wuischpard, Intel’s Vice President, Data Center Group and General Manager, Workstation and High Performance Computing. “Working closely with Cray, we will deploy the Many Integrated Core (MIC) architecture on the next generation Cray XC supercomputer, delivering over 3 teraflops of performance per single socket node to power a wide set of applications and taking an important and viable step towards Exascale.” | 1:00p |
Experts: Electromagnetic Interference Threat to Uptime is Real
Electromagnetic pulse (EMP) and intentional electromagnetic interference are serious threats that have not been at the forefront of conversation within the data center industry, but awareness is growing, according to experts who delivered a keynote at the Data Center World conference in Las Vegas, Nev., on Wednesday.
George Baker, CEO of BAYCOR, noted that with the proliferation of cloud computing, more data is being placed in fewer baskets, and that reliance on failover sites has reduced physical security. The problem EMPs present could be continental in scale. “Do not depend on electric utilities for protection,” he said.
A backup data center is normally just 60 miles or less down the road from the primary one, a distance that would not necessarily protect against an EMP.
“The common response is, ‘It can’t happen,’” said Michael Caruso, director of government and specialty business development at ETS-Lindgren. “It can and will happen.”
Regulations need to be pushed, he said, and the utilities are not anxious to regulate themselves. There has been some progress, however, with a few individual utilities taking steps.
While EMP attacks have not been common in the past, there are critical points of failure that could cause an attack to be catastrophic. The increasing importance of data centers to North American infrastructure means providers need to do better at both pushing regulation of utilities and protecting the data residing within their walls.
A lot of money has already been invested in mitigating flicker, sag, harmonic distortion and other electrical problems that can impose significant costs on data centers. Intentional electromagnetic interference, however, goes beyond what most of these protections cover.
While the North American power grid is fairly resilient, several issues pose threats down the road. Several official US government studies have addressed electrical reliability in the case of solar storms, high-altitude EMP (a nuclear detonation high in the atmosphere) and coordinated terrorist attacks.
A particular vulnerability that concerns the government is the threat to large power transformers. They are the size of small houses, vulnerable and difficult to transport; moving one often involves special provisions as well as reinforcement of roads and bridges.
Another issue is that the North American power grid consists of three basic interconnections with no single entity in charge, which is a major vulnerability.
There are also nine critical substations that, if taken down, would create a cascading effect taking down a big portion of the grid.
Different Ways to Fry Electronics
An EMP’s impact can range from a few feet to 10 kilometers. A suitcase version can be built for less than $200, and a truck-mounted version for about $2,000.
An EMP can deliver a field of 10,000 to 50,000 volts per meter, while common IT equipment immunity standards protect equipment only up to about 10 volts per meter, a thousand to five thousand times lower than the threat.
A Boeing CHAMP cruise missile can fly around turning off the lights below its path, while an unpredictable burst caused by a solar storm can wreak havoc on infrastructure.
Protecting the Data Center
Shielded enclosures, power and signal line filters and RF-tight doors are all methods of protecting the data center, but none of them is cheap. For a greenfield project, such protection may result in a five to eight percent cost increase.
There are two defined levels of protection in the data center:
Level 1: Shielded electrical infrastructure, where the host facility goes down but equipment is not affected
Level 2: All-inclusive protection, where all points of entry and backup power are protected. Multiple sources for power and cooling, HVAC and generators are included in the protection scheme. | 2:00p |
How to Get the Most Value from Your Cloud Provider
As cloud computing continues to impact the modern organization, businesses will need to look at how they deploy their IT environment – and where. While the use of the cloud addresses many challenges often faced by IT departments, there are two little-known pitfalls of the cloud.
First, the “Perceived Performance Paradox”: many cloud providers are seemingly comparable because they sell similar services, but actually differ greatly when it comes to underlying hardware architecture and performance. Second, the “Goldilocks Effect”: the common industry practice of offering resources in pre-packaged bundles, rather than allowing customers to determine their own needs. In other words, hosting providers don’t generally offer the resource quantity that’s “just right.”
In this white paper from Expedient, we quickly find out the right ingredients for a solid cloud deployment and what it takes to partner with the right colocation and cloud provider.
Here’s the important point to understand: in today’s market, providers too often back users into a corner with limited options, when they should be acting as partners by tailoring a solution. The objectives of the person responsible for cloud initiatives in an organization are resource allocation, 100 percent uptime availability and lower overall costs, yet providers traditionally prepackage resources and market products as “easily consumable.” Well-known providers often hand over resources as soon as you enter credit card information, but how well does their solution fit your needs? Furthermore, what should you look out for when creating that long-term strategic partnership?
Download this white paper today to learn what it means to partner with a strategic cloud provider – versus one that’s just trying to give you resources. As the paper outlines, there are several key factors to consider when evaluating a cloud partner. These include:
Enterprise-grade environment
- N+2 redundant hardware
- Operational approach complements a variety of industry and government compliance requirements including SOX, PCI DSS and HIPAA, supported by third-party SOC attestation
- Geographically diverse cloud locations
Measurable performance
- Ability to monitor network capacity (Mbps), memory capacity (GB), storage capacity (GB), disk I/O (IOPS) and CPU utilization in real time (a minimal monitoring sketch follows this list)
- 100% uptime SLA
Optimal interoperability between virtual and physical platforms
- Colocation and cloud services located in the same facility, interconnected by high capacity bandwidth for seamless interoperability
Guaranteed security and compliance
- Robust procedure in place to address compliance issues
- Server platforms based on Intel® Xeon® processors with hardware-based Intel® AES-NI support for accelerating strong encryption
Timely, quality support staff
- 24x7x365 on-site technical support
- Multiple available data centers positioned in varying geographical locations to offer redundant failover
- Ability to switch service to a different facility without customer-facing interruption in the event of a major service issue
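As a concrete illustration of the real-time visibility described in the “Measurable performance” item above, here is a minimal, provider-agnostic sketch in Python that samples the same metrics (network Mbps, memory and storage GB, disk IOPS and CPU utilization) on a single host. It assumes the third-party psutil package is installed and is not tied to any particular provider’s tooling:

import time
import psutil  # third-party package: pip install psutil

INTERVAL = 1.0  # sampling window in seconds

psutil.cpu_percent()  # prime the CPU counter so the next call covers the window
net0, disk0 = psutil.net_io_counters(), psutil.disk_io_counters()
time.sleep(INTERVAL)
net1, disk1 = psutil.net_io_counters(), psutil.disk_io_counters()

# Network throughput (Mbps) and disk operations per second (IOPS) over the window
mbps = (net1.bytes_sent + net1.bytes_recv - net0.bytes_sent - net0.bytes_recv) * 8 / INTERVAL / 1e6
iops = (disk1.read_count + disk1.write_count - disk0.read_count - disk0.write_count) / INTERVAL

mem = psutil.virtual_memory()    # memory capacity and use, in bytes
store = psutil.disk_usage("/")   # storage capacity and use, in bytes
cpu = psutil.cpu_percent()       # CPU utilization (%) since the priming call

print(f"net {mbps:.1f} Mbps | mem {mem.used / 1e9:.1f}/{mem.total / 1e9:.1f} GB | "
      f"disk {store.used / 1e9:.0f}/{store.total / 1e9:.0f} GB | {iops:.0f} IOPS | cpu {cpu:.0f}%")

A provider offering real-time monitoring exposes the same categories of data across the whole environment, typically through a dashboard or API rather than per-host scripts.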
Remember that your cloud and data center are direct drivers of your business. When selecting a cloud or colocation provider, keep in mind that you’re in it for the long run. This means working with a strategic partner that can scale dynamically with the needs of your data center as well as with the goals of your organization. | 4:59p |
Top Ten Data Center Stories, April 2014
Microsoft’s big moves in the Midwest and Southwest captured the attention of Data Center Knowledge readers in April. Also, life-safety issues such as a fire and a bomb threat at two separate data center facilities were the topics of two of the other most popular stories for the month, followed by our look inside Apple and Facebook data centers. Here are the most viewed stories on Data Center Knowledge for April 2014, ranked by page views. Enjoy!
| 5:30p |
Google Shows off POWER Server Motherboard
Google has developed a motherboard using POWER8 server technology from IBM, and is showing it off at the IBM Impact 2014 conference in Las Vegas this week. The new motherboard is an outgrowth of Google’s participation in the OpenPOWER Foundation, a non-profit developing data center technology based on the POWER Architecture.
Google’s display of a POWER8 motherboard is notable because the company builds its own servers by the tens of thousands, and POWER could represent an alternative to chips from Intel, which is believed to supply the processors for Google’s servers. Gordon MacKean, senior director of hardware for Google and chairman of the OpenPOWER Foundation, shared an image of the motherboard on Google+ this week.
“We’re always looking to deliver the highest quality of service for our users, and so we built this server to port our software stack to POWER (which turned out to be easier than expected, thanks in part to the little-endian support in P8),” MacKean writes. “A real server platform is also critical for detailed performance measurements and continuous optimizations, and to integrate and test the ongoing advances that become available through OpenPOWER and the extended OpenPOWER community.”
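MacKean’s aside about little-endian support hints at why the port was easier than expected: x86 servers are little-endian, so when POWER8 also runs little-endian, byte-level data layouts carry over unchanged. A minimal, generic illustration (ordinary Python, not Google’s code) of what the two byte orders mean for serialized data:

import struct

value = 0x0A0B0C0D  # an arbitrary 32-bit integer

little = struct.pack("<I", value)  # little-endian layout, as on x86 or little-endian POWER8
big = struct.pack(">I", value)     # big-endian layout, as on traditional POWER systems

print(little.hex())  # 0d0c0b0a
print(big.hex())     # 0a0b0c0d

Software that assumes the x86 layout when reading or writing binary data keeps working on a little-endian POWER8 system, which is one less class of bugs to chase during a port.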
Last week the OpenPOWER Foundation announced that its members had developed their first “white box” server, comprising a hardware reference design from Tyan and firmware and an operating system developed by IBM, Google and Canonical. | 6:00p |
OnApp Cloud Platform Available on IBM SoftLayer Infrastructure
OnApp and IBM have partnered to bring the OnApp cloud platform to service providers, enabling them to stand up cloud services hosted on IBM’s SoftLayer infrastructure.
OnApp’s platform automates orchestration, server management, provisioning, scaling, failover and billing, among other functions, all designed for infrastructure services, such as private and public cloud, dedicated servers, content delivery network (CDN), storage and DNS backup. The platform is now configured specifically for SoftLayer.
Another key feature is the OnApp Federation, a global network of clouds with 170 points of presence in 133 cities in 43 countries, through which cloud providers can resell excess infrastructure.
“Our partnership with IBM will make it easier than ever for service providers to launch their own clouds,” OnApp CEO Ditlev Bredahl, said. “Soon you’ll be able to get your own pre-configured OnApp cloud deployed on demand in SoftLayer data centers and sell cloud, dedicated servers, CDN and more without worrying about the hardware.
“With instant access to best-in-class infrastructure and the OnApp platform, we’re making it easy for service providers to launch new clouds and expand their footprint with new locations.”
Earlier this week, IBM launched its Cloud Marketplace with more than 100 SaaS applications, providing a single online destination for cloud services and giving service providers access to its own customer base. | 7:35p |
Data Center Jobs: Power Distribution, Inc.
At the Data Center Jobs Board, we have a new job listing from Power Distribution, Inc., which is seeking a Field Service Technician in Richmond, Virginia.
The Field Service Technician is responsible for energizing, servicing and repairing Power Distribution Units, Line Conditioners, Static Automatic Transfer Switches, Branch Circuit Monitoring and other electrical equipment as required to support PDI’s business. The role involves traveling across the US and internationally, performing preventive maintenance visits and start-up installations as scheduled, positively representing PDI during all contact with customers, and conducting all actions consistent with PDI values. To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed. |