Data Center Knowledge | News and analysis for the data center industry
Tuesday, July 30th, 2013
12:30p
June 2013 Exascalar: The Performance Push to High Efficiency

Winston Saunders has worked at Intel for nearly two decades and currently leads server and data center efficiency initiatives. Winston is a graduate of UC Berkeley and the University of Washington. You can find him online at “Winston on Energy” on Twitter.
 WINSTON SAUNDERS
Intel
With the recent publication of the Green500 and Top500, it’s time to update the Exascalar analysis. You may recall that Exascalar is a way to look simultaneously at efficiency and performance trends of the world’s leading supercomputers.
I chose the theme of this blog based on the Top10 Exascalar systems for June 2013, listed below. There are two new entries, both at the top of the list. The highest ranking Exascalar system is also the performance leader of the Top500: the heterogeneous Chinese Tianhe-2 computer, based on Intel Xeon and Xeon Phi, which weighs in not only with very high efficiency but also at a power scale of 17.8 MW. High efficiency systems are now so dominant among high performance supercomputers that ranking the top performance systems by Exascalar produces only a slight re-ordering. Note that the median efficiency scalar equals the best on the list, whereas the median performance scalar differs from the best by about 20 percent. This is what is meant by the performance push to high efficiency: while high efficiency may not always translate to high performance, the highest performance requires high efficiency.
Figure 1: The Top10 Exascalar systems for June 2013.
The performance-efficiency scalar analysis shown in Figure 2 below displays all the characteristics of the Exascalar Taxonomy I discussed previously: the now-familiar triangular shape, the (somewhat stretching) power wall, and, in the lower right, the innovation doorway, where emergent high efficiency systems first come onto the scene (in this case the Green500 leader, CINECA’s Eurora system, based on Xeon and NVIDIA GPUs).
Figure 2: The performance-efficiency scalar analysis for June 2013.
Those familiar with Exascalar will notice a slight change in the graph. The concentric circles of previous Exascalar analyses have been replaced by straight lines. This change produces only very small shifts in the Exascalar values (less than 0.1 percent for the leading system), simplifies the calculation, and improves long-term “Exascalability.” I’ll write more about the changes to Exascalar in a subsequent blog.
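For readers who want to experiment with the numbers themselves, here is a minimal back-of-envelope sketch of the straight-line calculation in Python. It assumes the exascale goal of 1 exaflops delivered in 20 MW (an efficiency of 50 gigaflops per watt) and treats a system's Exascalar as its performance and efficiency log scalars projected onto that goal's diagonal; take it as a sketch for exploration, not the official calculation behind the published lists.

```python
import math

# Assumed exascale goal: 1 exaflops delivered in 20 MW,
# i.e. an efficiency target of 50 gigaflops per watt.
EXA_PERF_GFLOPS = 1.0e9        # 1 exaflops, expressed in gigaflops
EXA_EFF_GFLOPS_PER_W = 50.0    # 1 exaflops / 20 MW

def exascalar(rmax_gflops, power_kw):
    """Straight-line Exascalar sketch: the performance and efficiency
    log scalars (relative to the exascale goal), summed and projected
    onto the diagonal by dividing by sqrt(2)."""
    efficiency = rmax_gflops / (power_kw * 1000.0)             # gigaflops per watt
    perf_scalar = math.log10(rmax_gflops / EXA_PERF_GFLOPS)
    eff_scalar = math.log10(efficiency / EXA_EFF_GFLOPS_PER_W)
    return (perf_scalar + eff_scalar) / math.sqrt(2)

# Publicly reported June 2013 figures for Tianhe-2:
# roughly 33.86 petaflops Linpack at about 17,808 kW.
print(round(exascalar(33.86e6, 17808), 2))   # ≈ -2.04
```

The concentric circles correspond, in this sketch, to the root-sum-square of the same two scalars, so for systems sitting near the diagonal the two forms agree almost exactly, which is why the leading system shifts by less than 0.1 percent.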
The blue Top10 Exascalar line is now inside Exascalar = -3 for the first time. The green trend line, showing the evolution of the Top Exascalar system since November 2007, reveals the close parametric relationship between supercomputing leadership and Exascalar. While the leading system for June 2013 took a small backward step in efficiency, the overall gain in performance (at the expense of increased power) improved Exascalar.
The fit parameters of the historical Exascalar trend continue to show an increase of about 0.35 per year (or a factor of about 2.3), consistent with the June 2012 trend. The fitted Top Exascalar line intercepts zero near April 2019. However, without breakthroughs that push the rate of efficiency improvement above historical levels, the trend will become increasingly difficult to maintain due to power costs.
Figure 3: The historical Top Exascalar trend.
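As a rough sanity check on those fit numbers (a simple extrapolation from the sketch above, not the actual regression): with the leading system near Exascalar = -2 and the trend improving at about 0.35 per year, the line reaches zero roughly 5.7 years after June 2013, consistent with the April 2019 intercept.

```python
# Rough sanity check on the quoted fit parameters (not the actual regression):
slope_per_year = 0.35          # historical Exascalar improvement rate quoted above
leader_june_2013 = -2.0        # approximate leading value from the sketch above
years_to_zero = -leader_june_2013 / slope_per_year
print(round(years_to_zero, 1))  # ≈ 5.7 years, i.e. early 2019
```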
Which brings us back to our theme, the performance push to high efficiency. Overall, Exascalar continues to reflect the amazing progress being made in supercomputing. The fact that Exascalar reflects mostly performance differentiation of systems underscores the indispensability of high efficiency in high performance computing. However, as we can see from the trends, even greater pushes in efficiency are required; without more breakthroughs, the current Exascalar march appears to be at risk as power limitations start imposing themselves.
As I mentioned above, I’ll review some of the thinking behind the (minor) changes in the way Exascalar was calculated in my next blog. Until then, let me know your comments, questions, or concerns below.
Winston previously wrote about the November 2012 Exascalar analysis in Supercomputing & Efficiency: November 2012 Exascalar, Part II, and about The Taxonomy of Exascalar in January.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

12:35p
Global Capacity Expands in 8 Equinix Sites

Some of the cabling density inside the Equinix DC-11 data center in northern Virginia. Global Capacity said today that it is deploying PoPs at 8 Equinix locations. (Photo: Equinix)
One of the trends we’ve seen at Equinix in recent quarters is companies deploying equipment across multiple Equinix colocation facilities. That trend continues today with the announcement that network connectivity provider Global Capacity is expanding its Points of Presence (PoPs) into eight Equinix data centers in North America, connecting to the Equinix Ethernet Exchange.
Global Capacity has signed a Solution Partner agreement with Equinix to deliver its One Marketplace platform to automate and streamline the procurement of Ethernet, SONET and Wavelength network connectivity into Equinix data centers. Global Capacity will offer real-time, competitive quotes for circuits and automated ordering, provisioning and management of access network services for extending the reach of Equinix customers’ TDM and Ethernet services.
“Equinix operates as a critical connection point for the market where applications and networks converge. Global Capacity is excited to be an integral part of the Equinix Marketplace,” adds Ben Edmond, Chief Revenue Officer of Global Capacity. “Global Capacity’s innovative One Marketplace and Equinix’s robust interconnection platforms complement each other, giving service providers and enterprise customers the opportunity to design, price and execute on the optimal network required, and to simplify and automate these processes. Together, these capabilities will become a significant asset to Global Capacity and Equinix’s mutual customers.”
“Since our founding in 1998, Equinix has optimized network infrastructure. Our dense interconnection platform includes nearly 100 data centers around the world and 4,000 customers. The relationship with Global Capacity will enable us to support our customers’ strategies for cloud and network connectivity,” said Jim Poole, General Manager, Global Networks & Mobility for Equinix.

1:03p
Scoble Weighs in on OpenStack, Amazon and the API Wars

The chatter continues regarding the future course of OpenStack. A week after Cloudscaling’s Randy Bias called on the OpenStack community to focus on compatibility with Amazon’s APIs, tech blogger Robert Scoble has responded with an open letter of his own. Scoble works for Rackspace, a primary backer of the OpenStack project, but also spends much of his time interviewing companies that run cloud infrastructure on Amazon Web Services. In addressing the options for OpenStack, Scoble cited conversations with Amazon customers like Mindtouch’s Aaron Fulkerson and Don MacAskill of SmugMug, who are looking for solutions that can bring radical improvements to their fast-growing infrastructures.
“Even with hundreds of companies working together and this investment in code, R&D, and money OpenStack has limited resources,” Scoble writes. “It’s clear we have two philosophies that are conflicting here. One wants those limited resources to be spent on making APIs compatible with Amazon. One wants those limited resources to be spent on making new cloud systems that will have a 10x return for people like Aaron and Don.
“If you believe cloud innovation is slowing down, you should listen to Randy Bias because there will be huge value in providing an API alternative, not an innovation alternative, if that is the case,” writes Scoble. “If you believe, like I do, that we are going to see more change in cloud infrastructure in the next five years than in the past 10, then keep investing in real innovation and keep pushing to bring the contextual age to the world.”
The conversation continues … What’s your take? Share your thoughts in our comments.

2:00p
Best of the Data Center Blogs for July 30

Here’s a roundup of some interesting items we came across this week in our reading of data center industry blogs:
A Brief History of Cloud Computing - Having become part of IBM, SoftLayer reflects on the legacy of the mainframe in the evolution of utility computing: “Believe it or not, “cloud computing” concepts date back to the 1950s when large-scale mainframes were made available to schools and corporations. The mainframe’s colossal hardware infrastructure was installed in what could literally be called a “server room” (since the room would generally only be able to hold a single mainframe), and multiple users were able to access the mainframe via “dumb terminals” – stations whose sole function was to facilitate access to the mainframes.”
Call a Plumber…? There’s Water Dripping out of Electrical Conduits - At the Schneider Electric blog, Barry Rimler looks at seasonal challenges: “With the arrival of the “dog days of summer” occasionally comes an unexpected and potentially dangerous byproduct of the weather. The problem can show up as water dripping out of electrical conduits, right into critical data center “grey space” or “white space” equipment, that is clearly not designed to handle water. This most likely is not a plumbing problem, rather a characteristic of the “changeable states” of water.”
Bessemer Starts Cloud Company Index – From Jordan Novet at GigaOm: “As of this month, major cloud computing companies have a market cap of more than $100 billion, according to calculations from investors at Bessemer Venture Partners. And so, with all that money floating around, the firm has drawn up an index of the top 30 companies offering cloud services.”
Video: Cognitive Computing with the SyNAPSE Project – From InsideHPC: “In this video, IBM’s Dharmendra Modha describes the SyNAPSE Project, a research effort to develop a computer chip inspired by the human brain.”

2:30p
Data Bunkers: Going Underground With Cavern Technologies

Bloomberg News has discovered the data bunker phenomenon, characterizing it as a new reaction to securing “the cloud” in response to recent natural disasters. As DCK readers are aware, the trend has been around much longer than that. In this video, Bloomberg’s Richard Falkenrath goes inside a limestone cave 125 feet beneath Lenexa, Kansas to visit with Cavern Technologies, which houses 70 customers in an underground facility. Cavern President John Clune says the cool below-ground temperatures allow the company to save on cooling costs. This video runs about 2 minutes.
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

3:30p
Data Bunkers: Iron Mountain’s Built-to-Suit Bunkers

Earlier we brought you a look inside an underground data center from Cavern Technologies that’s 125 feet underground. Perhaps the best known developer of subterranean mission-critical space is Iron Mountain, which for many years has been storing sensitive customer data in a huge underground facility near Pittsburgh. Iron Mountain recently entered the data center business in a more concrete fashion, unveiling a program to build and lease data centers within The Underground, its 145-acre records storage facility located 220 feet underground in a former limestone mine. In this video, Iron Mountain’s Nicholas Salimbene provides a tour of The Underground and describes the process of building a custom data bunker for Iron Mountain tenants. This video runs about 2 minutes, 30 seconds.
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

3:36p
Data Center Jobs: DCN Cables

At the Data Center Jobs Board, we have a new job listing from DCN Cables, which is seeking a Customer Service Representative in Wake Forest, North Carolina.
The Customer Service Representative is responsible for strengthening customer relations by accurately meeting customer service needs: receiving and placing calls for sales, proposals and order placement; assisting and supporting throughout the organization as needed; assisting in the implementation of company marketing plans; maintaining accurate records of all sales and prospecting activities (including sales calls, meetings, closed sales and follow-up activities) in the company CRM system; and meeting frequently with the production and material managers to provide insight on upcoming orders and assist with scheduling. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

6:15p
Cologix Expands into New Facility in Vancouver

A row of cabinets inside the Cologix data center in Dallas. The company is expanding its presence in Vancouver. (Photo: Rich Miller)
Colocation specialist Cologix is expanding into a new data center in downtown Vancouver. The company has added a 15,000 square foot expansion at 1050 West Pender Street, with phase 1 of the new site available in the first quarter of 2014. The expansion is the result of strong demand for network neutral colocation space and will support more than 350 cabinets at full capacity, roughly quadrupling the company’s capacity in downtown Vancouver.
The expansion to 1050 West Pender builds on Cologix’s existing presence in 555 West Hastings.
“Vancouver is the third largest city in Canada and our experience in the market shows it has been underserved from a data centre perspective for some time,” explains Sean Maskell, President of Cologix Canada. “We field regular requests from our diverse global customer base for establishing a connectivity-centric presence in Vancouver. In addition, Vancouver is home to a thriving set of tech savvy businesses that have not had access to reliable colocation capacity downtown and we look forward to fulfilling that need.”
A recent 451 Research report, “Canadian Market Assessment,” identified Cologix as the only data center provider in Vancouver that focuses exclusively on colocation and interconnection. Competitors in the market include Peer 1 Hosting, which has several data centers in the area.
The Cologix story is about connectivity: adding space in carrier hotels in central business districts and assembling a footprint in key network hubs such as Minneapolis, Montreal, Toronto and Vancouver. The company is also thriving in Dallas, and last year it opened new space at 151 Front Street in Toronto, right before raising $81.65 million in support of its expansion strategy.
“The demand for network neutral colocation continues to grow across Canada while the number of truly network neutral providers is decreasing,” said Grant van Rooyen, President and CEO of Cologix. “We are pleased with the investments we have made in Toronto and Montreal to add incremental capacity and it is natural that we are turning our attention to Vancouver. No other downtown Vancouver provider can offer the access to capacity for growth and network neutral connectivity that we will provide at 1050 West Pender Street.”
Cologix will build to its new Tier 3 design standards for concurrent maintainability, and the facility will include robust power redundancy and highly efficient, green cooling technologies that take advantage of free cooling enabled by Vancouver’s mild climate.

6:54p
Data Center Construction: A Developer’s Eye View

Construction is a process that’s fraught with the potential for error and missed deadlines. From moving rock with explosives to withstanding typhoon weather, some data center developers have seen it all.
What’s it like to be on the front lines of a major data center construction project? Chris Curtis, co-founder and SVP of Development for Compass Datacenters, takes us on the odyssey of construction with his sometimes tongue-in-cheek posts about the process. Chris describes the complexity of the construction process for data centers, explores the ups and downs (and mud and rain) of constructing data center facilities and highlights the creative problem-solving required for the unexpected issues that sometimes arise with every construction process. We present his entire series as a whole for your reading enjoyment.
Building A Data Center Can Be A Blast: A Little TNT Can Help
The need to blast through rock in site preparations for a new data center can be a challenge. Like most things in the development world, the requirement to conduct “controlled explosions” has some benefits (What guy doesn’t like blowing things up?) and drawbacks.
Data Center or Ark? How Bad Weather Causes Construction Chaos
A data center construction project must have a schedule, and that schedule must allow for the fact that during a six-month construction period you’re probably going to run into some less than optimal weather. This column describes how running into much more than sub-optimal weather during data center construction can lead to cranky tempers and tough negotiations with sub-contractors.
A Race To The Finish: The Final Phase of Construction On A Data Center Facility
This column covers the last phase of a data center development project. A tight schedule tends to add a degree of intensity to things. People are focused and the inter-dependencies of various operations become even more critical. And that’s when the “tour groups” show up.