Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Friday, August 14th, 2015
12:00p
Solar Power, High-Voltage DC to Power Texas Supercomputer
Supercomputers are extremely cool and help us pursue some of humanity’s biggest hopes and dreams, from mapping the human genome to finding the Higgs boson, the “God particle.” But they are also extremely expensive to operate, because they consume enormous amounts of energy.
The Texas Advanced Computing Center in Austin is home to not one but a series of energy-guzzling supercomputers. TACC is where the University of Texas at Austin keeps the big computing brains its researchers use to work on problems like the influenza A virus, which claims hundreds of thousands of lives around the world every year, or malfunctions of the NMDA brain receptor, which are linked to Parkinson’s, Alzheimer’s, and schizophrenia.
A recently announced project between TACC and a Japanese government research and development organization aims to demonstrate that much of the energy those supercomputers require can be generated by solar panels. The project’s other goal is to show that the machines can also run on an atypical but reportedly more efficient power distribution scheme that feeds high-voltage DC power directly to servers.
An Uncommon Combination
Solar power, especially on-site solar power, is not a mainstay at data centers today by any means, but there are now numerous sizable deployments around the world. Examples of the biggest ones include Apple’s two 20 MW on-site solar farms at its Maiden, North Carolina, data center, and a 14 MW solar installation powering the QTS data center campus in Princeton, New Jersey.
The biggest challenges of on-site solar generation are the huge amount of real estate required to produce energy at data center scale and the intermittency of solar power in general. Data centers need a steady supply of electricity, so solar generation at data centers has to be combined with grid power, large-scale energy storage, or both.
No Boom for High-Voltage DC in Data Centers
But solar power lends itself especially well to use with high-voltage DC power distribution systems, since that’s the kind of current photovoltaic plants generate. A typical low-voltage AC distribution system in a data center receives 480V AC power from a utility feed, converts it to DC to charge UPS batteries, converts it back to AC on the UPS output, and then steps it down to 208V AC in a power distribution unit before pushing it to the server power supply, where it’s converted once again to 12V DC power for consumption by the computer components.
One argument for high-voltage DC power is elimination of all those conversion steps, since each one of them results in energy losses and reduces energy efficiency. Another argument is that a simpler system with fewer conversion points is more reliable because there are fewer components that can fail.
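To get a feel for how those conversion losses compound, here’s a rough back-of-the-envelope comparison; the per-stage efficiency figures in this sketch are illustrative assumptions, not measurements from TACC or NTT Facilities.

```python
# Rough comparison of end-to-end power-path efficiency for a conventional
# AC distribution chain versus a 380V DC chain with fewer conversion steps.
# All per-stage efficiencies are assumed, illustrative values.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    total = 1.0
    for stage in stages:
        total *= stage
    return total

# Typical AC path: UPS rectifier -> UPS inverter -> PDU transformer -> server PSU
ac_stages = [0.97, 0.96, 0.98, 0.92]   # assumed
# HVDC path: single rectification stage -> DC-input server power supply
dc_stages = [0.97, 0.95]               # assumed

ac_eff = chain_efficiency(ac_stages)
dc_eff = chain_efficiency(dc_stages)
print(f"AC path efficiency: {ac_eff:.1%}")   # ~84%
print(f"DC path efficiency: {dc_eff:.1%}")   # ~92%
print(f"Energy saved by the DC path: {1 - ac_eff / dc_eff:.1%}")
```

With these assumed numbers the DC path comes out roughly 9 percent ahead, which is in the same ballpark as the savings the industry debate tends to cite.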
The arguments against the alternative distribution method include the fact that most hardware on the market doesn’t come with power supplies that can accept high-voltage DC, as well as the increased risk of a potentially deadly arc flash that comes with bringing high-voltage electricity to the IT racks, where data center technicians work. Yet another argument is that modern AC power distribution systems have become so efficient that whatever efficiency gains DC systems offer may be negligible in comparison.
TACC Hopes to Achieve 15 Percent Energy Savings
However, in high-power-density data centers like the ones that house supercomputers at TACC, the efficiency gains of a DC system can add up to a lot of savings, says Dan Stanzione, the center’s executive director. TACC’s newest supercomputer, Stampede, can require as much as 5 MW to operate, although it runs at about 3 MW on a normal day.
In addition to Stampede, TACC has three more supercomputing systems, as well as numerous storage and cloud computing clusters. Needless to say, Stanzione’s power bill is huge. “An enormous amount of our cost has to do with data center power,” he says.
This is why TACC has done as much as it could to increase the energy efficiency of its data center power and cooling systems, and why the project with Japan’s New Energy and Industrial Technology Development Organization (NEDO) and NTT Facilities is so interesting to the center. If the experimental setup proves as effective as expected, TACC stands to save a lot of money by implementing it at a larger scale in the future.
The proof-of-concept project is fairly small, consisting of 250 kW of photovoltaic generation capacity that will be deployed over a university parking lot. It will provide shade for about 60 parking spaces, Stanzione said. In tandem with a utility feed, it will power an HPC cluster of about 10,000 CPU cores with a 200 kW power requirement.
Besides the potential power-savings benefits for TACC, Stanzione hopes to publish the results of the experiment. The plan is to deploy the compute cluster with a traditional AC power distribution scheme first to establish a baseline, then convert it to high-voltage DC and compare the two sets of data, he says.
The goal is to achieve 15 percent energy savings, which he admits is ambitious. But even if the project demonstrates only 5 percent savings, at TACC’s level of energy consumption that would translate into a substantial cost reduction.
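To put those percentages in perspective, here’s a quick back-of-the-envelope calculation using the roughly 3 MW average draw cited above; the electricity rate is an assumed placeholder, not a figure from TACC.

```python
# Rough annual savings at Stampede's scale. The ~3 MW average draw comes from
# the article; the $0.08/kWh electricity rate is an assumed placeholder.
avg_draw_mw = 3.0
hours_per_year = 24 * 365
rate_per_kwh = 0.08  # USD, assumed

annual_kwh = avg_draw_mw * 1000 * hours_per_year   # ~26.3 million kWh
for savings in (0.05, 0.15):
    saved_kwh = annual_kwh * savings
    print(f"{savings:.0%} savings = {saved_kwh:,.0f} kWh = ${saved_kwh * rate_per_kwh:,.0f} per year")
```

Under those assumptions, even the conservative 5 percent scenario works out to well over $100,000 a year for a single machine.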
NTT Facilities, a data center design, construction, and management company that’s a subsidiary of Japanese telecommunications giant NTT, will act as the integrator and supplier of the overall power distribution system (in this case 380V DC). The company has been putting a lot of effort into expanding its business in the US market, including last year’s acquisition of Massachusetts-based data center infrastructure specialist Electronic Environments Corp.
Energy Research on Japanese Government’s Dime
NEDO is footing the bill, which amounts to $13 million, including $4 million in computing equipment. This is not the organization’s first US project. NEDO developed an energy management system demonstration project for the electrical grid in Hawaii and a smart-home demonstration project in New Mexico, and pursued net-zero-energy nanotechnology together with the State University of New York.
The data center industry is notoriously, and justifiably, conservative about adopting new technologies, especially where critical power is concerned. Because their job is to keep servers humming 24-7-365, data center operators generally prefer tried-and-true solutions, and it is proof-of-concept deployments like the one at TACC that can help new ways of thinking about data center energy make the transition from idea to reality.

3:00p
Friday Funny: New Data Center Cabinets
Sometimes when you flip to a new spec it can look a little funky for a while…
Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon, and we challenge our readers to submit the funniest, most clever caption they think will be a fit. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon.
Congratulations to Dan and Darrell, whose captions for the “Wind Power” edition of Kip and Gary split first place in the last contest. Dan’s caption was: “Its a shame that all these windmills only power one rack….” and Darrell’s was: “It’s our anti-drone security defense system.”
Lots of submissions came in for last week’s “Liquid Cooling” edition – now all we need is a winner. Help us out by submitting your vote below!
For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!
4:50p
DataGravity Extends Data Management to Realm of Security
DataGravity this week extended the scope of the data management software it embeds in its storage systems to include support for alerts that are generated when sensitive data is stored on its Discovery Series platform.
DataGravity is looking to carve out a space in a crowded storage field by embedding data governance, search and discovery, and data protection tools that drive the convergence of information and storage management. Version 2 of the Discovery Series now extends those capabilities into the realm of security, said Jeff Boehm, VP of marketing for DataGravity.
“We are making it easier to both find and define sensitive data,” said Boehm. “Security capabilities now have to be built into everything.”
In general, DataGravity is trying to drive the convergence of data and storage management at the expense of established storage vendors that charge extra for information management software or require IT organizations to license third-party data management software. The startup raised a $50 million Series C funding round late last year, bringing its total funding to $92 million.
Naturally, it’s a lot easier to drive that convergence in the midmarket, where IT job roles are not as well defined as they are in larger enterprise IT organizations. The number of people making decisions across information management and data storage technologies in a midmarket IT organization is substantially smaller than in enterprise IT organizations, where information management and storage security are often considered separate domains managed by different teams.
As part of that effort, Discovery Series V2 provides automated email alerts that notify organizations when and where sensitive information is stored and how it’s being handled. That information can then be correlated against IT policies, enabling administrators to identify user access anomalies and potential compliance violations.
Also now included is the ability to audit who accesses what data based on their role in the organization, along with tools that allow administrators to define, assign, and schedule compliance policy checks. DataGravity has added support for custom and pre-defined tagging, which lets administrators create company- and domain-specific tags around particular terms or classes of data.
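DataGravity hasn’t published the internals of its tagging and alerting engine, so the sketch below is purely conceptual: the patterns, the file-write hook, and every function name are hypothetical, meant only to show how pattern-based tags can trigger alerts when sensitive content lands on storage.

```python
# Conceptual sketch only -- not DataGravity's implementation.
import re

# Hypothetical company-specific tag definitions (one regex per tag).
TAG_PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text):
    """Return the set of sensitivity tags whose patterns match the content."""
    return {tag for tag, pattern in TAG_PATTERNS.items() if pattern.search(text)}

def on_file_written(path, text, alert):
    """Hypothetical storage hook: tag new content and alert on sensitive matches."""
    tags = classify(text)
    if tags:
        alert(f"Sensitive data ({', '.join(sorted(tags))}) stored at {path}")

# Example: prints an alert because the content contains an SSN-like string.
on_file_written("/exports/hr/notes.txt", "Employee SSN: 123-45-6789", print)
```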
Finally, DataGravity is also making available a plug-in for VMware vRealize Operations and Log Insight, management software widely employed in VMware environments to manage IT infrastructure.
Aimed primarily at midmarket IT organizations, the Discovery Series is only available through the 150 reseller partners DataGravity has authorized thus far. The platform comes in three models ranging from 18 to 96TB, with pricing starting at $45,000.
As one of the first examples of what DataGravity describes as “data-aware” storage platforms, the Discovery Series suggests it may only be a matter of time before data security and compliance issues force the convergence of information and storage management everywhere.

5:01p
Weekly DCIM Software News Roundup: August 14
N’Compass issues a new release of its LiveDC user experience for better data center decisions, Device42 integrates its DCIM software with power distribution units from Enlogic, and Romonet signs another global data center provider.
- N’Compass introduces new LiveDC user experience. DCIM software firm N’Compass released version 3.3 of its LiveDC data center decision-making solution. The new release provides two new user interfaces for facilitating IT and data center decisions. The company has also launched a new collaboration community site and a new wiki to centralize LiveDC implementation and usage documentation.
- Device42 integrates with Enlogic PDUs. Device42 announced that it has integrated its DCIM software with power distribution units by Enlogic Systems.
- Romonet signs global data center provider. Romonet announced that it has signed a major deal with another Top 5 data center provider to deliver data-driven models of the provider’s worldwide locations and analytics that will identify precise cost and energy savings.
5:18p
Box Hires EMC’s Enterprise File Sync Head to Lead Strategy
Just one month after naming former CIO of HP Software Paul Chapman as its new CIO, cloud storage platform Box is welcoming another newcomer, according to our sister site Talkin’ Cloud.
Formerly with EMC, Jeetu Patel has joined Box as senior vice president of platform and chief strategy officer. In his new role, he will be responsible for Box’s platform business and developer relations and report to Box co-founder and CEO Aaron Levie.
Most recently, Patel served as general manager and chief executive of EMC’s enterprise file sync and share solution. Prior to joining EMC, he was president of Doculabs, a research and consulting firm based in Chicago.
These two personnel moves were made in tandem with Box’s goal of expanding its global presence and building an IT infrastructure capable of sustaining growth, reported the Wall Street Journal recently.
The article also stated that Box partnered with IBM in June, a deal that affords Box users the opportunity to store files on the SoftLayer cloud platform. Additionally, Big Blue will deliver Box services on Apple Inc.’s iPhones and iPads.
Los Altos, California-based Box is also working with Microsoft and Cisco on various integrations.
Its platform ecosystem has grown to nearly 50,000 developers and its platform serves more than 4.5 billion third-party API calls per month.
The complete article can be found at: http://talkincloud.com/cloud-storage/box-taps-emc-exec-drive-platform-growth?code=um_TWTLK081315

5:45p
Google Brings Big Data Services Cloud Dataflow, Cloud Pub/Sub Out of Preview
This article originally appeared at The WHIR
Google this week launched two Big Data services into general availability in a bid to attract developers and enterprise customers. Google Cloud Dataflow and Cloud Pub/Sub join the company’s pitch to businesses with Big Data processing workloads, which Google said in a blog post include financial fraud detection, genomic analysis, inventory management, click-stream analysis, A/B user interaction testing, and cloud-scale ETL.
Cloud Dataflow was launched to beta in April, and provides a unified programming model to avoid the complexity of developing separate systems for batch and streaming data sources. In addition to fully managed, fault tolerant, highly available, and SLA-backed batch and stream processing, Cloud Dataflow provides a model for balancing correctness, latency, and cost with massive-scale, unordered data, Google said. The company also touts its performance versus Hadoop, extensible SDK, and Native Google Cloud Platform integration for other Google services like Cloud Datastore, BigQuery, and Cloud Pub/Sub.
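For readers who want a concrete picture of that unified model, here’s a minimal sketch written with the Apache Beam Python SDK, the open source descendant of the Dataflow SDK (the SDK available when this article ran was Java). It’s an illustration of the programming model, not code from Google’s announcement.

```python
# A minimal word count in the Dataflow/Beam programming model. Swapping the
# bounded beam.Create(...) source for a streaming source such as Pub/Sub would
# leave the rest of the pipeline unchanged -- the point of the unified model.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.Create(["solar power", "dc power", "solar panels"])
        | "Split" >> beam.FlatMap(str.split)
        | "Pair" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)   # e.g. ('solar', 2), ('power', 2), ...
    )
```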
“Streaming Google Cloud Dataflow perfectly fits requirements of time series analytics platform at Wix.com, in particular, its scalability, low latency data processing and fault-tolerant computing,” said Gregory Bondar, Ph.D., Sr. Director of Data Services Platform at Wix. “Wide range of data collection transformations and grouping operations allow to implement complex stream data processing algorithms.”
Cloud Pub/Sub delivers reliable real-time messaging between different services, Google APIs, and third-party services at up to 1 million message operations per second. Google says that in addition to integrating applications and services, Pub/Sub helps real-time big data stream analysis by replacing traditionally separate queuing, notification, and logging systems with a single API. The service also costs as little as 5 cents per million message operations for sustained usage.
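As a rough illustration of what publishing and subscribing looks like in code, here’s a short sketch using the present-day google-cloud-pubsub Python client, which post-dates this article; the project, topic, and subscription names are placeholders and would need to exist (with credentials configured) before the script runs.

```python
# Placeholder project, topic, and subscription names; not from the article.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "clickstream")

# publish() returns a future; result() blocks until the service acks the message.
future = publisher.publish(topic_path, data=b'{"event": "click", "page": "/home"}')
print("published message id:", future.result())

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "clickstream-analyzer")

def callback(message):
    print("received:", message.data)
    message.ack()

# Pull messages asynchronously for a few seconds, then stop.
streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull.result(timeout=5)
except Exception:
    streaming_pull.cancel()
```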
The Google product batch also includes an alpha release of gcloud pubsub, and a beta release of new Identity and Access Management APIs and Permissions Editor in Google Developers Console.
The enterprise cloud big data service market may become more crowded with General Electric announcing last week that it will introduce a cloud data and analytics platform for industry.
This first ran at http://www.thewhir.com/web-hosting-news/google-brings-big-data-services-cloud-workflow-cloud-pubsub-out-of-preview

8:07p
Facebook Takes Over Server Management Software Control from Vendors
Facebook has enhanced its open source software that handles certain hardware management functions so that it can support one of numerous Facebook server designs. Until recently, the company was only using the software to manage its own switch hardware.
The software is called OpenBMC, Facebook’s open source version of the Baseboard Management Controller software used for managing hardware temperature, energy usage, and other functions. Facebook engineers found the BMC software that came with vendor-supplied hardware to be “too closed” for their purposes, so they developed their own.
Originally developed for Facebook’s data center switches, OpenBMC now also supports the company’s modular System-on-Chip-based server system called Yosemite, which it announced in March and proposed as a contribution to its open source hardware design community called the Open Compute Project. Yosemite is a chassis that accepts four Intel SoCs and allows Facebook to pack up to 192 server nodes into a single rack.
The company’s engineers have added numerous features to OpenBMC in addition to support for Yosemite, Facebook software engineer Sai Dasari wrote in a blog post. But porting it to the multi-node server required some major design choices to improve usability and security, and to add support for REST APIs and JSON objects instead of raw bytes for exchanging information.
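The blog post doesn’t spell out the API routes, but the general shape of a REST-plus-JSON management interface can be sketched like this; the host, port, endpoint path, and field handling below are assumptions for illustration, not documented OpenBMC routes.

```python
# Hypothetical client-side query of a BMC REST endpoint. The URL, port, and
# endpoint path are illustrative assumptions, not documented OpenBMC routes.
import json
import urllib.request

BMC_SENSORS_URL = "http://bmc.example.internal:8080/api/sys/sensors"  # hypothetical

def read_sensors(url=BMC_SENSORS_URL, timeout=5):
    """Fetch sensor readings as a JSON object instead of parsing raw bytes."""
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return json.load(response)

if __name__ == "__main__":
    for name, value in read_sensors().items():
        print(f"{name}: {value}")
```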
[Diagram: OpenBMC and Yosemite in context, courtesy of Facebook]
A BMC is actually a piece of hardware, also an SoC, with its own CPU, memory, storage, and I/O. It monitors temperature and adjusts server fan speed accordingly, handles remote power control, and logs errors for the main server CPU and memory, among other things.
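As a rough illustration of the first of those jobs, here’s a simplified temperature-to-fan-speed control loop; the thresholds, the ramp, and the sensor and fan interfaces are made-up placeholders, not Facebook’s firmware logic.

```python
# Simplified sketch of a BMC-style fan control loop. Thresholds and the
# read_temp/set_fan_duty callables are placeholders supplied by the caller.
import time

def fan_duty_for(temp_c):
    """Map an inlet temperature (deg C) to a fan duty cycle (percent). Assumed curve."""
    if temp_c < 30:
        return 30
    if temp_c < 45:
        return 30 + (temp_c - 30) * 3   # ramp from 30% toward ~75%
    return 100                          # full speed above 45 C

def control_loop(read_temp, set_fan_duty, interval_s=2.0):
    """Poll the temperature sensor and push the corresponding fan duty cycle."""
    while True:
        set_fan_duty(fan_duty_for(read_temp()))
        time.sleep(interval_s)
```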
The software that runs the BMC SoC is made by hardware vendors and usually closed, as another Facebook engineer wrote in an earlier blog post. The life of each release of the software would be as long as the life of the hardware. In other words, once a vendor moved on to the next generation of hardware, they also stopped development work on the current version of BMC software.
“When hardware development ended, the BMC software development stopped as well,” Tian Fang, a Facebook software engineer, wrote. “Further bug fixes or new features had to wait for the hardware manufacturer.”
As with many things at Facebook, its own BMC software was born out of necessity. The company wanted its own top-of-rack switch, and the development process involved a lot of specific BMC requirements. That slowed development down, because its hardware partners couldn’t be responsive enough in changing their BMC software.
After eight months of development, OpenBMC was born and deployed in production with Facebook’s Wedge switches. Facebook open sourced the software in March and said its Six Pack switches would soon be using it as well. Its capabilities have now been extended to Facebook’s server management, too.

9:48p
How to Keep Your IT Staff From Leaving?
Given the general shortage of professionals with advanced IT skills, retaining the IT employees organizations already have has become a critical priority. The challenge is that there is no shortage of companies trying to poach that IT talent.
While employees will respond differently to various incentives, just about everybody is trying to strike the right work-life balance. As such, all financial considerations being roughly equal, the flexibility of the organization becomes a critical factor in retaining IT talent, Jason Crane, a branch manager for the recruitment firm Robert Half Technology, said.
IT managers should not be lulled into thinking that geography plays a role in helping to retain staff. Most IT professionals are fairly mobile, and salary ranges across job functions are fairly close regardless of geography, because when it comes to IT talent, it’s a global market, he said.
IT managers should be as frank as possible in compensation conversations with their employees if they want to retain them.
“As much as possible it needs to be an open dialogue,” Crane said. “Given the demand, there’s a fair amount of greed.”
Crane will detail the level of competition for IT talent and some of the best practices IT managers can employ to retain their existing staff at the Data Center World conference in National Harbor, Maryland, this September.
Of course, how long this situation will last is anybody’s guess. Crane noted that just about every organization is investing in automation to get the most out of the IT employees it has. One day those investments will affect the balance of supply and demand for IT talent. But for the moment at least, that day still appears to be far in the future.
For more information, sign up for Data Center World National Harbor, which will convene in National Harbor, Maryland, on September 20-23, 2015, and attend Jason’s session titled “Latest IT Hiring and Compensation Trends.”