Data Center Knowledge | News and analysis for the data center industry
Tuesday, November 25th, 2014
1:00p |
You May Not Know it, but You Used a Supercomputer Today
NEW ORLEANS - Let’s face it, supercomputers are cool. In a culture that’s impressed by big numbers, they serve as poster children for our technological prowess, crunching data with mind-blowing speed and volume. These awesome machines seem to have little in common with the laptops and tablets we use for our everyday computing.
In reality, say leaders of the high performance computing (HPC) community, supercomputers touch our daily lives in a wide variety of ways. The scientific advances enabled by these machines transform the way Americans receive everything from weather reports to medical testing and pharmaceuticals.
“HPC has never been more important,” said Trish Damkroger, a leading researcher at Lawrence Livermore National Laboratory and the chair of last week’s SC14 conference, which brought together more than 10,000 computing professionals in New Orleans. The conference theme of “HPC Matters” reflects a growing effort to draw connections between the HPC community and the fruits of its labor.
“When we talk at these conferences, we tend to talk to ourselves,” said Wilf Pinfold, director of research and advanced technology development at Intel Federal. “We don’t do a good job communicating the importance of what we do to a broader community. There isn’t anything we use in modern society that isn’t influenced by these machines and what they do.”
Funding Challenges, Growing Competition
Getting that message across is more important than ever, as America’s HPC community faces funding challenges and growing competition from China and Japan, which have each created supercomputers that trumped America’s best to gain the top spot in the Top 500 ranking of the world’s most powerful supercomputers. The latest Top 500 list, released at the SC14 event, is once again topped by China’s Tianhe-2 (Milky Way) supercomputer, followed by the Titan machine at the U.S. Department of Energy’s Oak Ridge National Laboratory.
The DOE just announced funding to build two powerful new supercomputers that should surpass Tianhe-2 and compete for the Top 500 crown by 2017. The new systems, to be based at Lawrence Livermore National Laboratory in California and Oak Ridge National Laboratory in Tennessee, will have peak performance in excess of 100 petaflops and will move data at more than 17 petabytes a second – the equivalent of moving over 100 billion photos on Facebook every second. The systems will use IBM POWER chips, NVIDIA Tesla GPU accelerators, and high-speed Mellanox InfiniBand networking.
These projects mark the next phase in the American HPC community’s goal of deploying an “exascale” supercomputer, which will compute at 1,000 petaflops (an exaflops), as opposed to the roughly 33 petaflops achieved by Tianhe-2. The latest roadmap envisions a prototype of an exascale system by 2022.
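As a rough back-of-the-envelope check of those comparisons (the average photo size below is implied by the announcement’s figures rather than stated in it), here is a short Python sketch:

```python
# Back-of-the-envelope checks on the DOE announcement figures.
PETA = 1e15

# 17 PB/s vs. "over 100 billion photos ... in a second":
bandwidth_bytes_per_s = 17 * PETA
photos_per_s = 100e9
implied_photo_kb = bandwidth_bytes_per_s / photos_per_s / 1e3
print(f"Implied average photo size: ~{implied_photo_kb:.0f} KB")  # ~170 KB

# Exascale (1,000 petaflops) vs. Tianhe-2 (~33 petaflops):
print(f"Exascale vs. Tianhe-2: ~{1000 / 33:.0f}x")  # ~30x
```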
Winds of Change in Washington
That type of rapid advance will require significant investment. Nearly three quarters of HPC professionals say it continues to be difficult for HPC sites to obtain funding for compute infrastructure without demonstrating a clear return on investment, according to a new survey from Data Direct Networks.
The largest funder of American supercomputing has been the U.S. government, which is in a transition with implications for HPC funding. Many of the projects showcased last week at SC14 focus on scientific research, including research areas that may clash with the political agenda of the Republican Party, which gained control of both the House and Senate in elections earlier this month.
A case in point: Climate change has been a focal point for HPC-powered data analysis, such as a new climate model unveiled last week by NASA. As noted by Computerworld, Sen. Ted Cruz of Texas is rumored to be in line to chair the Senate Subcommittee on Science and Space, which oversees NASA and the National Science Foundation. Cruz is a climate change skeptic who does not believe global warming is supported by data.
4:00p |
Sentient’s Distributed AI to Live in Tata Data Centers
Artificial intelligence startup Sentient Technologies has raised a $103.5 million funding round, and one of its new investors, Tata Communications, will also provide data centers and network connectivity services for the massive-scale distributed AI system the startup plans to deploy globally.
Sentient’s technology combines sophisticated machine learning with massive scale, running AI jobs distributed across millions of processing nodes.
A company like Tata has the capabilities to provide a geographically distributed data center topology that spans the globe and an extensive global IP network. The Mumbai-based service provider claims that nearly one-quarter of the world’s Internet routes go over its network, including a wholly owned submarine cable network.
Tata owns and operates submarine cables that connect the U.K. to Portugal, Spain, and New Jersey; Japan and Guam to Oregon and California; Singapore to India; China to the Philippines, Vietnam, and Singapore; and several Arabian Peninsula countries to a cable system in the Arabian Sea.
It also has more than 1 million square feet of data center space in 44 markets around the world.
Tata CEO Vinod Kumar said, “The scale of our leading global network infrastructure and data center footprint also complements Sentient’s growth plans and will enable its global deployment.”
US Conglomerate Gets Behind AI Startup
Tata led the funding round together with Access Industries, a U.S. conglomerate with holdings in natural resources and chemicals, media and telecommunications, and real estate.
Jörg Mohaupt, a director at Access Industries, said the company would use Sentient’s technology in several of its businesses. “We are delighted to be investors in Sentient and will apply its technology to our portfolio of e-commerce, media and entertainment businesses so that they can do innovative things and create new products for their customers,” he said in a statement.
Capabilities Demoed Behind Closed Doors
Sentient says it has been demonstrating its capabilities in financial trading and medical research under the radar, choosing those two fields because of the high volume and wide variety of data generated there.
The AI startup’s latest round (Series C) is the largest the company has closed so far, bringing the total raised to about $143 million.
Existing investor Horizon Ventures and a group of private individuals also participated.
4:30p |
Survey: Context is Next Big Data Challenge
The value driver for big data is not volume but velocity, or time to value. In the past year the focus has shifted from simply capturing data to putting that data in context, according to a recent IBM report. The report also notes a growing disparity between the haves and have-nots of big data analytics, with three-quarters of those surveyed doing little or next to nothing.
A CFO study, also by IBM, backs these claims, finding that CFOs now spend 250 percent more time integrating data just to do basic reporting than they did five years ago. Companies’ ability to store data has outpaced their ability to sift through it and identify what they are trying to solve.
“Last year people were just happy to have white noise,” said Glenn Finch, global leader for technology and data at IBM Global Business Services. “It was about making noise. People don’t really recognize the term ‘context’ yet. They call it ‘relating data’ but context is the single most challenging thing that a data scientist has.”
Finch believes analytics itself is getting easier; however, the insights achieved are demanding further forensics. “You don’t know what you don’t know, as the saying goes,” said Finch. “Now you can almost know too much. By knowing more, you want to know more. The amount of transparency that boards and markets are demanding has gone off the chart. That’s why you’re seeing that significant change in the office of the CFO.”
A quarter of those interviewed were identified as “front runners.” They are good at acquiring data, but they are largely challenged in analyzing and acting, according to Finch.
For front runners, time to value has accelerated with companies seeing in-year benefits. “In-year benefits have never happened in all my time doing this,” said Finch, who has been an author on the report for the last six years. “I believe that the speed message will continue,” he said. “We see it happening everywhere. Speed to value is going to be the number one focal area.”
He also notes that the focus of analytics is shifting, from 75 percent customer-focused initiatives to half focused on the customer this year. The data suggests that analytics is turning an eye inward. A 2012 study found initiatives were almost entirely focused on the customer.
IBM’s big data and analytics business is growing. The company staked it out as a growth area five years ago, and it is the focus for everything from OpenPower servers to cognitive computing with Watson. These findings reinforce its strategy. “I clearly believe that if we haven’t arrived [at a critical point in big data analytics], we’re pretty close,” Finch said. “That’s why things like Watson have come into being. We’re generating more data than our minds can process.”
The New Chief Data Officer Role
A sizable number of organizations (46 percent) are re-inventing business processes by integrating digital capabilities. In addition, a new position is emerging, focused on capitalizing on analytics-driven insight: the Chief Data Officer.
The CDO employs data and analytics to drive decisions. The role is rapidly growing, with Gartner predicting that 50 percent of all companies in regulated industries will have a CDO by 2017.
The move to real-time data is shifting companies to “predict and act” mode rather than just “sense and react.” Big data makes big promises, but the complexity of driving insights is changing the makeup of businesses.
4:30p |
Questions Every IT Manager Should Ask About Thermal Management
JP Valiulis, vice president of Marketing at Emerson Network Power, is a sales and marketing leader with a history of growing revenue and profitability at leading B-to-B and B-to-C companies.
Managing data centers to keep servers and their applications functioning properly is a core responsibility of every IT manager, as is keeping costs and energy usage to a minimum. CIOs and IT managers don’t have to be cooling experts, but they should know enough to make intelligent management-level decisions about the most cost effective way to manage energy usage in their data center.
Today’s robust servers allow data centers to operate at hotter temperatures. You no longer need to run an icebox, nor should you need to wear a sweater inside your data center. With simple but smart adjustments or investments to the cooling system, many data centers can cut their cooling electricity bill by up to 50 percent.
The following questions and best practices will help you identify the best thermal management strategy for your data center, with the end goal of slashing energy costs without reducing availability of critical infrastructure.
Turn Up the Thermostat
Are you hot when you walk through your data center? You probably should be, at least a little. The old standard was 72 degrees for return air (the mixture of air returning from computers to the cooling unit) and 50 percent relative humidity. Today, you can push return air temperatures as high as 95 degrees. Do this in small increments over a few days to avoid unexpected humidity trouble and to ensure all the IT equipment keeps functioning properly; done that way, there is little risk to applications and IT equipment. Enlist your facilities manager or vendor partners to assess how to do so safely. Remember, for every 1 degree Fahrenheit increase in temperature you will save 1.5 to 2.0 percent of your energy costs.
Raise chilled water temperatures. For many years, 45 degrees was the standard for water in the chiller. That’s changing. Operating chillers at up to 55 degrees is possible today, reducing chiller energy consumption by about 20 percent. Every degree matters: each 1 degree increase in water temperature reduces chiller energy consumption by 2 percent. This can make a huge difference, since the chiller is the heart of the cooling system and consumes approximately 75 percent of the system’s electricity. Be sure to work with your facilities manager, because raising chilled water set points can reduce the cooling capacity of your data center cooling units, which is fine only if you have some excess capacity.
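To see how these rules of thumb add up, here is a minimal Python sketch. It treats both per-degree figures as applying to cooling-system energy (an assumption; the article does not spell out the baseline), and the setpoint changes are illustrative rather than recommendations:

```python
# Illustrative savings from raising temperature setpoints, using the linear
# rules of thumb quoted above (assumptions, not measured data).

RETURN_AIR_PCT_PER_F = 0.015  # 1.5-2.0 percent per degree F of return air (low end)
CHILLER_PCT_PER_F = 0.02      # 2 percent of chiller energy per degree F of chilled water
CHILLER_SHARE = 0.75          # chiller draws ~75 percent of cooling-system electricity

def return_air_savings(delta_f):
    """Fraction of cooling energy saved by raising return air delta_f degrees F."""
    return delta_f * RETURN_AIR_PCT_PER_F

def chilled_water_savings(delta_f):
    """Fraction of total cooling-system energy saved by raising chilled water temperature."""
    return CHILLER_SHARE * delta_f * CHILLER_PCT_PER_F

# Hypothetical setpoint changes: return air 72 -> 80 F, chilled water 45 -> 55 F.
print(f"Return air +8 F:     ~{return_air_savings(8):.0%} of cooling energy")
print(f"Chilled water +10 F: ~{chilled_water_savings(10):.0%} of cooling energy")
```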
Put Cold Air in its Place
Do you know where your cold air is going? The physical arrangement of your data center can make a big difference. If you have a raised floor, keep it as uncluttered as possible. Areas beneath the raised floor are often jammed unnecessarily with wires that block cold air from efficiently reaching its target.
Make sure blanking panels are in place wherever there are unused spaces in racks. Plug gaps in floors, walls and ceilings to seal the room. Add return plenums (closed chambers that direct air flow) where appropriate. These tweaks have one goal: make your cooling unit work less, a sure-fire way to cut costs.
To control temperature, you don’t want hot air mixing with cold air in the data center aisle. By placing a pre-fab structure over the area that needs to be cooled – the aisle between two rows of racks – you create a “cool room” within your data center. Warm air discharged from the back of servers can’t creep around the front to meet cool air. Thus, cool areas stay cool and warm areas stay warm.
Adjust Cooling Capacity
Do your cooling units have the ability to ramp up and down with changes in your IT loads? Your cooling equipment should have variable capacity components (fans and, if applicable, compressors) to adjust cooling capacity up and down with your IT load. Constant-speed fans are common, but they can’t adjust to a data center’s actual demand. A 10 HP fan motor draws about 8.1 kW at 100 percent speed, but only 5.9 kW at 90 percent and 2.8 kW at 70 percent. Savings are significant (fan power falls roughly with the cube of speed) when fan speed can be matched to the data center’s actual requirements.
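Those figures track the fan affinity laws, under which fan power falls roughly with the cube of fan speed. A minimal sketch, taking the 8.1 kW full-speed draw from the example above:

```python
# Fan affinity law: fan power scales roughly with the cube of fan speed.
FULL_SPEED_KW = 8.1  # 10 HP fan motor at 100 percent speed (from the example above)

def fan_power_kw(speed_fraction, full_speed_kw=FULL_SPEED_KW):
    """Approximate power draw at a given fraction of full fan speed."""
    return full_speed_kw * speed_fraction ** 3

for pct in (100, 90, 70):
    print(f"{pct:>3}% speed: ~{fan_power_kw(pct / 100):.1f} kW")
# Prints ~8.1, ~5.9, and ~2.8 kW, matching the figures quoted above.
```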
Is it Time for an Upgrade?
How old are your unit controls? If they are more than 4 or 5 years old, you may have opportunities to upgrade your equipment controls and save up to 40 percent on your energy bills with an aggressive payback, made even better with energy utility rebates.
New controls – like a thermostat in a zoned house – give users more information and greater control than older versions. Controls can be networked together to prevent units from “fighting” each other – one heating, the other cooling. Small sensors placed strategically throughout a data center make the cooling process work smoothly. These sensors, generally part of a control upgrade, allow data centers to automatically optimize temperatures and airflow in different parts of a room and to isolate potential trouble quickly.
New data center smart technologies pay for themselves quickly through lower energy costs. How much? When new controls and variable capacity components are added to the operational tweaks described above, cooling power consumption in a typical enterprise data center with 500kW of IT load drops more than 50 percent, from 380kW to 184kW, or $171,690 in annual energy savings assuming $0.10 per kilowatt-hour. That drops the PUE from 1.76 to 1.37. That’s worth looking into.
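The arithmetic behind those numbers is straightforward; a short sketch using the figures above:

```python
# Worked arithmetic for the example above: 500 kW of IT load, cooling power
# dropping from 380 kW to 184 kW, at an assumed $0.10 per kilowatt-hour.
IT_LOAD_KW = 500
COOLING_BEFORE_KW, COOLING_AFTER_KW = 380, 184
RATE_PER_KWH = 0.10
HOURS_PER_YEAR = 8760

def pue(it_kw, cooling_kw):
    """PUE approximated as (IT + cooling) / IT, ignoring other facility loads."""
    return (it_kw + cooling_kw) / it_kw

annual_savings = (COOLING_BEFORE_KW - COOLING_AFTER_KW) * HOURS_PER_YEAR * RATE_PER_KWH
print(f"PUE before: {pue(IT_LOAD_KW, COOLING_BEFORE_KW):.2f}")  # ~1.76
print(f"PUE after:  {pue(IT_LOAD_KW, COOLING_AFTER_KW):.2f}")   # ~1.37
print(f"Annual savings: ${annual_savings:,.0f}")                # ~$171,700
```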
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
6:30p |
Performance Analysis: Bare-Metal and Virtual Clouds
More organizations are off-loading their applications and workloads into the cloud environment because it offers efficiency, agility, recovery capabilities and more. But how can you determine which cloud model is right for you?
Virtualized Infrastructure-as-a-Service (IaaS) has drawn increasing attention and usage due to its granular billing, ease-of-use and broad network access. In comparison, bare-metal cloud services, which are essentially physical servers that can be deployed on demand and billed hourly, offer significant improvements over virtualized IaaS in performance, consistency and cost efficiency for many applications. Benchmark tests comparing similarly sized virtual and bare-metal cloud configurations reveal that bare-metal cloud yields superior CPU, RAM, storage, and internal network performance.
In this whitepaper from Internap, Cloud Spectator monitors the CPU, RAM, storage, and internal network performance of more than 20 of the world’s best-known IaaS services to understand important aspects of virtual server performance. Tests are run at least three times per day, 365 days per year, to capture variability in addition to performance level. Tests are chosen based on reliability and practicality. The goal is to provide an indication of where certain providers perform well relative to others, which can help consumers judge which services would best suit their applications by understanding the performance of the provider resources most critical to those applications.
Remember, benchmarks alone should not be the deciding factor in the provider selection process. Feature sets, configuration matches, pricing, and ancillary services such as security, compliance, and disaster recovery should always factor into any vendor selection. However, performance is a very important piece of the puzzle.
Cloud Spectator measured the performance of Internap’s bare-metal cloud offering against virtual offerings from Amazon and Rackspace. The goal was to quantify how much of a performance penalty users are taking by choosing virtual cloud servers.
Over a period of 10 days, Cloud Spectator ran benchmark tests across Internap bare-metal cloud, AWS Elastic Compute Cloud (EC2) with Amazon Elastic Block Storage (EBS), and Rackspace OpenCloud with Rackspace Block Storage. Each test was run to understand the unique performance capabilities of each offering’s CPU, internal network, RAM and disk. Cloud Spectator accounted for performance capability and stability for each provider to understand the value each one delivers to its users. Tests were run on 8GB servers for Internap and Rackspace, and 7.5GB servers for Amazon.
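As an illustration of how such repeated runs can be summarized by both performance level and stability, here is a hedged Python sketch; it is not Cloud Spectator’s actual scoring methodology, and the benchmark scores are hypothetical:

```python
# Illustrative only: summarize repeated benchmark runs by mean score and
# coefficient of variation (lower CV = more consistent performance).
# Not Cloud Spectator's methodology; the scores below are made up.
from statistics import mean, stdev

def summarize(scores):
    """Return (mean score, coefficient of variation) for a list of benchmark runs."""
    avg = mean(scores)
    cv = stdev(scores) / avg if len(scores) > 1 else 0.0
    return avg, cv

runs = {
    "bare-metal": [1020, 1015, 1022, 1018, 1019],
    "virtual": [880, 790, 910, 760, 850],
}
for name, scores in runs.items():
    avg, cv = summarize(scores)
    print(f"{name:>10}: mean={avg:.0f}, CV={cv:.1%}")
```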
Download this whitepaper today to read the comparison results and learn how a bare-metal cloud could let users derive more value from certain IaaS workloads. With the option to use a dedicated server as they would a cloud server, users can potentially better manage workloads that typically require direct access to physical hardware, such as databases and calculation-intensive applications. In addition, organizations that have limited their IT deployments to long-term hosted or owned environments due to performance concerns can look at bare-metal cloud as a way to maintain quality while improving agility and asset efficiency.
9:30p |
CloudFlare to Open Data Centers in China in 2015: Report 
This article originally appeared at The WHIR
CloudFlare will open 12 data centers in mainland China over the next six months in preparation for the beta launch of local service in January 2015, according to TechCrunch. China is already the company’s second biggest market by traffic and users, despite CloudFlare having no local marketing, sales, or support.
As international companies generally must do to achieve regulatory compliance and operate in China, CloudFlare has partnered with a local company, though it is keeping the identity of the Chinese partner company confidential for now.
The company expects that adding a Chinese presence to its network will not only improve the performance and security of sites in China, but will also allow the company to mitigate attacks originating from China within the Chinese portion of its network. China is the source of “many of the very bad attacks,” CloudFlare CEO and co-founder Matthew Prince told TechCrunch.
The company has also had recent experience with the touchy relationship between IT and politics in China. It battled huge DDoS attacks as a service provider for democracy site PopVote through its Project Galileo during Hong Kong’s protests this past summer.
Despite the various hurdles, CloudFlare was motivated to expand to China by customer feedback.
“One of the biggest requests from our customers is to be fast in China,” said Prince. “We’ve been working on this problem for three years. It has a number of regulatory and technology challenges, but we finally cracked that problem.”
The local partner company will handle censorship of material designated by the Chinese government, so any such content will be available outside of China while CloudFlare remains in compliance locally. The delicate balance necessary is indicated by the mix of censorship and unusual internet freedom experienced by attendees of last week’s World Internet Conference in Wuzhen.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/cloudflare-open-data-centers-china-2015-report
10:00p |
Cloud Gaming Poised for 2015 Turning Point: Report 
This article originally appeared at The WHIR
Cloud gaming is poised for a turning point in 2015, according to a report released Monday by Strategy Analytics. Huge gains in addressable audience, new entries into the market from big brand names, and improving network performance have set up the coming year as what the Boston market research firm calls cloud gaming’s “inflection point.”
The number of devices with PlayStation Now or NVIDIA Grid Game Streaming Service will reach 30 million this year, but will surge 500 percent to 150 million by the end of 2015, says the report, titled “NVIDIA Goes off the Grid with Cloud Gaming Service.” It will be important for gaming services to choose a reliable cloud host to avoid the problems Xbox Live experienced earlier this month, when a Microsoft Azure outage took services offline and frustrated users.
OnLive achieved the highest profile of any previous cloud gaming service, but filed for bankruptcy in 2012. OnLive cited the cost of scaling its server infrastructure as one of the reasons for its failure, but Square Enix entered the market with cloud gaming platform Project FLARE in November, claiming that its patented architecture will allow it to avoid OnLive’s cost issues.
Now OnLive has relaunched. According to VentureBeat both its library of high-end games and its availability are growing.
The report warns that the performance demands and lag-sensitivity of gaming will force service providers to trade performance off against price. However, once the balance is found the report authors see other major players in the gaming industry becoming more competitive.
“2014 is proving to be a watershed moment with major players putting their credibility and brand names on the line to make cloud gaming work,” said Michael Goodman, Director of Digital Media Strategies at Strategy Analytics. “While broadband speeds and consumer acceptance of subscription models have come a long way, access to content remains an issue for all services. The major video game publishers have so far successfully managed an incremental transition from physical to digital media, but cloud gaming offers publishers a new revenue stream.”
Developing technologies in both hosting and gaming are enabling new experiences in multi-player games, as well as providing gaming options for mobile devices beyond their native capabilities. Whether a balance of performance and price attractive enough to gamers can be found remains to be seen.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/cloud-gaming-poised-2015-turning-point-report
10:10p |
DataBank Expands Downtown Dallas Data Center
DataBank has expanded its downtown Dallas data center by 4 megawatts and said it has signed a large unnamed tenant for nearly half of the 22,000 square foot expansion.
The data center occupies the former Federal Reserve Bank building in Dallas. Since opening the facility roughly eight years ago, the company has nearly filled the 130,000 square feet of data center space it has built out there and is now in expansion mode there and in other locations.
It added a sizable facility of 60,000 square feet in nearby Richardson in early 2013. The company recently accelerated expansion in Richardson, deploying a second 10,000 square foot pod.
In a statement, DataBank CEO Tim Moore said continued demand in the region drove expansion of the Richardson and downtown Dallas data centers.
“We specialize in delivering tailored large-scale deployments by layering in custom features, complimentary consulting, and top-notch support,” he said. “This allows DataBank to respond to a client’s unique requirements, rather than a cookie-cutter approach.”
In addition to the Dallas area sites, the company has data centers in Kansas City and Minneapolis.
The acquisition of VeriSpace in 2013 provided its initial footprint in Minnesota. The company is currently building a second, 90,000 square foot facility in the Minneapolis market.
DataBank has two data centers in Kansas City after having acquired a company called Arsalon earlier this year. Arsalon co-founder Bryan Porter was recently named DataBank’s CTO.
The provider moved up the stack to offer managed services in 2013.
10:15p |
ASUS System Named World’s Most Efficient Supercomputer
November’s Green500, the latest release of the semiannual list of the world’s most energy-efficient supercomputers, named an ASUS supercomputer at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany, as the most efficient.
The GSI Helmholtz Centre conducts research using heavy ion accelerators. Its L-CSC cluster (a newcomer to the list) achieved 5.27 gigaflops per watt (billions of operations per second per watt) to earn the title.
It was in 168th place on this month’s Top500 list of the most powerful supercomputers. By comparison, China’s MilkyWay-2 supercomputer, which took the top spot on the recent Top500 list for the fourth consecutive time, logged about 1.9 gigaflops per watt.
To accomplish its record 5.27 gigaflops per watt, the L-CSC system combined Intel CPUs, an FDR Infiniband network, and AMD FirePro S9150 GPU accelerators.
According to ASUS documents, the system uses 160 ASUS ESC4000 G2S supercomputer servers, 224 AMD FirePro S9150 dual-GPU modules, an array of 112 Intel Xeon E5-2690 v2 processors, and 896 16GB DDR3-1600 memory modules.
This marks the first time AMD GPUs are part of a system in the top spot on Green500.
The Tsubame-KFC, which held the number-one spot in the previous two editions of the list, dropped to third position even though its gigaflops-per-watt figure improved slightly.
While Intel Xeon processors dominate the Green500 list, a variety of accelerators are prevalent as well: the top three systems alone use AMD FirePro GPUs, PEZY-SC many-core accelerators, and NVIDIA K20x GPUs.
Massive power consumption requirements are a key point of any discussion of exascale computing — the high performance computing industry’s next big milestone.
Green500 representatives noted that if L-CSC’s energy efficiency could be scaled linearly to an exaflop supercomputing system, one that can perform one quintillion floating-point operations per second, such a system would consume on the order of 190 megawatts.
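That estimate follows directly from the efficiency figure; a short sketch of the scaling:

```python
# Scaling L-CSC's measured efficiency linearly to an exaflop system.
EXAFLOP = 1e18          # one quintillion floating-point operations per second
GFLOPS_PER_WATT = 5.27  # L-CSC's Green500 result

watts = EXAFLOP / (GFLOPS_PER_WATT * 1e9)
print(f"~{watts / 1e6:.0f} MW")  # ~190 MW
```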