Data Center Knowledge | News and analysis for the data center industry
Tuesday, June 4th, 2013
| Time | Event |
| 11:50a |
Microsoft Adds By-the-Minute Cloud Pricing for Azure
Microsoft’s Brad Anderson arrives on stage at TechEd 2013 in an Aston Martin. (Photo: Microsoft)
Microsoft’s TechEd North America 2013 kicked off this week in New Orleans, where Server and Tools Corporate Vice President Brad Anderson seized the opportunity to showcase a Microsoft customer’s product by arriving onstage in a sleek new Aston Martin. Anderson unveiled a broad set of new capabilities across the full suite of Microsoft Cloud OS products and technologies. The conference conversation can be followed via the Twitter hashtag #msTechEd.
Windows Azure
On the cloud front, Microsoft announced that Windows Azure now offers per-minute billing for virtual machines, a move that improves cloud economics by letting customers manage costs at a more granular level than Amazon Web Services, which charges by the hour. Google also offers by-the-minute pricing for Google Compute Engine, while ProfitBricks and CloudSigma have also offered pricing in increments smaller than an hour.
To help developers along, Microsoft announced that the new Windows Azure MSDN benefit offers up to $150 per month in credits to use on any Azure service of their choice for development and testing. The billing approach and the economics behind it were outlined in a company blog post on Windows Azure.
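The economic difference between hourly and per-minute billing is easy to quantify. A minimal sketch, using a hypothetical hourly rate (not a published Azure or AWS price):

```python
# Compare hourly billing (partial hours rounded up) with per-minute
# billing. The rate below is a hypothetical figure for illustration,
# not a published Azure or AWS price.
HOURLY_RATE = 0.12  # USD per instance-hour (assumed)

def cost_hourly_billing(minutes_used: int) -> float:
    """Partial hours are rounded up to a full billed hour."""
    hours_billed = -(-minutes_used // 60)  # ceiling division
    return hours_billed * HOURLY_RATE

def cost_per_minute_billing(minutes_used: int) -> float:
    """Bill for exactly the minutes consumed."""
    return minutes_used * (HOURLY_RATE / 60)

# A burst workload that runs for 65 minutes:
print(cost_hourly_billing(65))                # 2 full hours billed
print(round(cost_per_minute_billing(65), 4))  # 65 minutes billed
```

For short-lived or bursty workloads such as test runs, the per-minute model avoids paying for a mostly idle second hour; for long-running instances the two models converge.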
Satya Nadella, President of the Server & Tools Business at Microsoft, elaborates on how Microsoft is leading the Cloud Era in his blog post. With half of the Fortune 500 using Windows Azure and over a thousand new customers signing up every day, Nadella believes the company is taking market share from enterprise incumbents. Microsoft’s distinctive offering is that its cloud infrastructure is available to customers and partners to build and operate their own clouds. He states that Microsoft is the first multinational company to bring public cloud services to China.
Enterprise IT
Anderson and other Microsoft (MSFT) executives showcased how new offerings across client, datacenter infrastructure, public cloud and application development help deliver a comprehensive enterprise platform. Luxury car manufacturer Aston Martin was one of the featured customers, as an example of using the full range of Microsoft products and cloud platforms for IT success.
“Our staff’s sole purpose is to provide advanced technology that enables Aston Martin to build the most beautiful, iconic sports cars in the world,” said Daniel Roach-Rooke, IT infrastructure manager, Aston Martin. “From corporate desktops and software development to private and public cloud, Microsoft is our IT vendor of choice.”
Microsoft introduced upcoming releases of its key enterprise IT solutions for hybrid cloud: Windows Server 2012 R2, System Center 2012 R2 and SQL Server 2014. Available in preview later this month, the products break down boundaries between customer datacenters, service provider datacenters and Windows Azure. With its cloud-first strategy, Microsoft is aiming for a faster pace of development and release to market. The hybrid cloud approach draws on Microsoft’s experience running large-scale cloud services, connects to Windows Azure, and works to provide a consistent platform across all three environments.
Windows 8.1
A Windows 8.1 update was also announced at the TechEd conference, where new networking features aim to improve mobile productivity. Enhancements include system-on-a-chip (SoC)-integrated mobile broadband, native Miracast wireless display and near field communication (NFC)-based pairing with enterprise printers. Security is also enhanced in the new update to address device proliferation and to protect corporate data and applications with fingerprint-based biometrics, multifactor authentication on tablets and remote business data removal to securely wipe company data from a device.
InCycle Software
Microsoft announced an agreement to acquire InCycle Software’s InRelease Business Unit. InRelease is a leading release management solution for Microsoft .NET and Windows Server applications. The acquisition of the continuous deployment solution will add release management capabilities to Microsoft’s application lifecycle management (ALM) and DevOps offerings, helping customers deliver applications faster, better and more efficiently.
“DevOps is an increasingly important part of ALM and a growing area of interest to chief information officers as businesses are pressured to develop and deploy quality applications at an increasingly faster pace,” said S. Somasegar, corporate vice president, Developer Division for Microsoft. “The InRelease continuous delivery solution will automate the development-to-production release process from Visual Studio Team Foundation Server, helping enable faster and simpler delivery of applications.” | | 12:27p |
How Big is AWS? Netcraft Finds 158,000 Servers 
Just how many servers does Amazon use for its cloud computing operation? Previous estimates have speculated that Amazon Web Services used as many as 450,000 servers to power its cloud infrastructure. Now the UK research firm Netcraft has issued an overview of Amazon’s infrastructure in which it pegs that number as being closer to 158,000 servers.
Amazon doesn’t provide much public information about the scope of its cloud infrastructure, which has led to outside attempts to estimate its scale. In September, Netcraft reported that Amazon Web Services had become the largest hosting company in the world based on the number of web-facing computers. In the last eight months, the company’s tally of web-facing computers has grown by more than a third, reaching 158,000. Netcraft tracks more than 200 million sites in its Internet census, which it has conducted on a monthly basis since mid-1995.
There are several reasons for the discrepancies between some of the estimates. Netcraft’s Hosting Provider Server Count only measures web-facing computers, so these numbers don’t represent the sum total of Amazon’s operations. Many customers use EC2 for batch data-processing, which doesn’t show up for the most part.
The numbers also don’t reveal much about GovCloud. The AWS region launched specifically for secure, government infrastructure isn’t likely to have web-facing servers, with Netcraft counting only 27. Netcraft can’t count private use of S3, the cloud storage service, which also factors into a true count. However, the numbers do reveal a massive operation and provide insight into AWS operations.
Elastic Compute Growing
The two largest EC2 regions are US East (Northern Virginia) and EU West (Ireland), which account for more than three quarters of all EC2 usage as measured by Netcraft. The newest AWS region, Sydney, accounts for under 1 percent despite tripling in size in the past four months. The big users of EC2 are Netflix, Instagram (a photo-sharing app popular with hipsters that Facebook bought for a sick amount of money) and the search engine DuckDuckGo.
Netcraft is able to survey websites using S3 publicly, and it is seeing continued growth for the cloud storage service. Domains were counted at 48,636, with four-month growth of 16.4 percent. Hostnames were at 138,588, which is growth of 11.4 percent over four months. Netcraft breaks down how S3 is being used in a more granular fashion in the report.
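Netcraft’s percentages can be turned back into absolute numbers. A quick sketch of the implied counts four months earlier, plus the annualized rate if the four-month pace were to continue (the compounding assumption is ours, not Netcraft’s):

```python
# Back out the implied S3 counts four months earlier from Netcraft's
# reported totals and growth rates, then annualize the domain growth
# by compounding three four-month periods (our assumption).
domains_now, domains_growth = 48_636, 0.164       # +16.4% over four months
hostnames_now, hostnames_growth = 138_588, 0.114  # +11.4% over four months

domains_before = domains_now / (1 + domains_growth)
hostnames_before = hostnames_now / (1 + hostnames_growth)
print(round(domains_before))    # about 41,784 domains four months earlier
print(round(hostnames_before))  # about 124,406 hostnames four months earlier

annual_domain_growth = (1 + domains_growth) ** 3 - 1
print(f"{annual_domain_growth:.1%}")  # roughly 57.7% implied annual growth
```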
The report also provides growth numbers for PaaS provider Heroku (owned by Salesforce.com), made possible by Heroku’s heavy use of AWS. Heroku announced availability in Ireland in April, so growth is visible there as well, though it remains small, with only 56 IP addresses in Ireland versus 4,915 in its major US East deployment.
The actual size of Amazon’s cloud business, both in terms of infrastructure and revenue, is open to speculation. In terms of revenue, AWS is still lumped into the “Other” category when the company breaks out numbers. That “other” category has grown to represent almost 5% of total revenue, up from 3.2% at this point in 2011. These seem like small percentages, but the company’s primary business is e-commerce, and that’s 5% of a very big number.
The full report from Netcraft can be found at Amazon Web Services’ growth unrelenting. | | 1:59p |
IBM Acquiring SoftLayer to Boost its SmartCloud
Racks of servers inside a data center at cloud infrastructure provider SoftLayer, which is being acquired by IBM.
IBM is acquiring SoftLayer, the world’s largest privately held cloud infrastructure provider. This is huge news, speeding up IBM’s focus on providing cloud services and bringing one of the largest hosting providers under Big Blue. Financial terms were not disclosed.
IBM is also announcing the formation of a new Cloud Services division, which will combine SoftLayer with IBM SmartCloud into a global platform. The new division will provide a broad range of choices to both IBM and SoftLayer clients, ISVs, channel partners and technology partners. SoftLayer’s services will complement the existing IBM portfolio with its focus, simplicity and speed, the companies said. The division will report to Erich Clementi, Senior Vice President, IBM Global Technology Services.
“Our clients are telling us they want to realize the transformative benefits of cloud today – not just for individual applications, but across their entire enterprise,” said Clementi. “SoftLayer is a perfect fit for IBM. It will help us smooth the transition of our global clients to the cloud faster, while enabling IBM to more efficiently offer them its broad portfolio of open IT infrastructure and software services.”
Buying SoftLayer allows IBM to make it easier and faster for clients around the world to incorporate cloud computing by marrying the speed and simplicity of SoftLayer’s public cloud services with the enterprise grade reliability, security and openness of the IBM SmartCloud portfolio.
‘Born on the Cloud’
“SoftLayer has a strong track record with born-on-the-cloud companies, and our move today with IBM will rapidly expand that footprint globally as well as allow us to go deep into the large enterprise market,” said Lance Crosby, CEO of SoftLayer. “The compelling opportunity is connecting IBM’s geographic reach, industry expertise and IBM’s SmartCloud breadth with our innovative technology. Together SoftLayer and IBM expand their reach to new clients – both born-on-the-cloud and born-in-the-enterprise.”
Headquartered in Dallas, Texas, SoftLayer serves approximately 21,000 customers with a global cloud infrastructure platform spanning 13 data centers in the U.S., Asia and Europe. Among its many cloud infrastructure services, SoftLayer allows clients to buy enterprise-class cloud services on dedicated or shared servers, offering a choice of where to deploy their applications. These clients stand to benefit as IBM brings new enterprise-grade functionality to SoftLayer, which they can adopt as their businesses grow.
SoftLayer has a breakthrough capability that provides an easy on-ramp to cloud adoption, especially for the Fortune 500. And for SoftLayer’s born-on-the-cloud customers, IBM opens a new market into the enterprise. Specifically, SoftLayer allows cloud services to be created very quickly on dedicated servers, rather than the virtual servers that are the norm in the public cloud.
This acquisition will:
- Speed IBM’s ongoing push to provide cloud services to the Fortune 500 companies that have yet to capitalize on cloud computing.
- Complement and extend IBM’s existing SmartCloud portfolio, which is already on track to deliver $7B in annual cloud revenue in just 18 months.
- Expand reach to new clients including born-on-the-cloud companies and traditional enterprises.
IBM chose SoftLayer because it will enable IBM to deliver an industry first: marrying the security, privacy and reliability of private clouds with the economy and speed of a public cloud. For the Fortune 500, this couldn’t come at a better time.
IBM intends to expand SoftLayer cloud offerings to include OpenStack capabilities, consistent with its entire SmartCloud portfolio and historic commitment to open standards such as Linux. Given that most companies will mix public and private cloud services, clouds need to interoperate. In that way, firms can better leverage cloud to run their social, mobile and Big Data applications.
IBM will also support and enrich SoftLayer’s cloud-centric partner ecosystem and its performance capabilities for Big Data and analytics. IBM will provide go-to-market and customizable resources for its expanding cloud ecosystem.
We’ll update with additional details after the companies’ press conference is completed. | | 2:15p |
Hybrid Packet-Optical Circuit Switch Networks Are the New Data Center Standard
Daniel Tardent is the Vice President of Marketing at CALIENT Technologies, Inc.
Almost a decade ago, before the ubiquity of smart phones, tablets, cloud computing and video streaming, university and industry researchers were already predicting the need for a hybrid packet-optical circuit switch data center network.
What they anticipated back in 2005 was data centers growing to support tens of thousands of servers and becoming more modular. Only with this hybrid approach, they suggested, could servers be connected to achieve the high levels of performance needed sustainably, in terms of both power and cost.
Now, the day has come when the needs of data centers have caught up with these far-sighted researchers, driven by the demands of big data, video, virtualization, cloud applications, mobile data and the need to store and replicate vast amounts of data.
These factors generate persistent, dynamically changing traffic patterns, especially in large cloud data centers where a relatively small number of applications can consume vast amounts of server resources. They also result in very large east-west data flows and the need for very low levels of over-subscription. The net result is that traditional packet-based aggregation networks can suffer from bandwidth constriction resulting in increased latency and degraded server and application performance.
The Need for Optical Switches
To solve this using an all-packet switch configuration would mean designing the data center network for worst-case traffic levels; however, this approach causes a large increase in capital cost, and requires more resources to configure. It also potentially adds to the problem by introducing additional latency due to the increased size of the switch fabric.
In the hybrid packet-optical circuit switch network, optical circuit switches (or photonic switches) are installed in a data center to augment packet-based switching to create a hybrid solution. Network management scripts or software-defined networking (SDN) are used to redirect data flows from one network to another. The flow detection and redirection can be based on a number of factors including scheduled events such as virtual machine or data migrations, or real time events such as a network analytic requesting more bandwidth.
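The flow-redirection logic described above can be sketched in a few lines. This is a hypothetical illustration of the decision rule, not CALIENT’s software or any real SDN controller API; the thresholds, class and function names are all assumptions:

```python
# Hypothetical sketch of hybrid-network flow redirection: an SDN
# controller offloads large, persistent ("elephant") flows from the
# packet network onto the optical circuit switch (OCS) fabric.
# Thresholds and names are illustrative assumptions, not a vendor API.
from dataclasses import dataclass

ELEPHANT_BYTES = 1 * 10**9  # offload flows larger than ~1 GB...
MIN_DURATION_S = 60.0       # ...that have persisted for at least a minute

@dataclass
class Flow:
    src_tor: str        # source top-of-rack switch
    dst_tor: str        # destination top-of-rack switch
    bytes_seen: int
    duration_s: float

def should_offload(flow: Flow) -> bool:
    """Large, long-lived flows amortize the OCS setup cost;
    short or small flows stay on the packet network."""
    return flow.bytes_seen >= ELEPHANT_BYTES and flow.duration_s >= MIN_DURATION_S

def redirect(flows):
    """Partition flows between the OCS fabric and the packet fabric."""
    ocs = [f for f in flows if should_offload(f)]
    packet = [f for f in flows if not should_offload(f)]
    return ocs, packet

vm_migration = Flow("tor-3", "tor-9", 40 * 10**9, 300.0)  # scheduled migration
web_request = Flow("tor-1", "tor-2", 20_000, 0.2)         # small mouse flow
ocs, packet = redirect([vm_migration, web_request])
print(len(ocs), len(packet))  # the migration goes optical, the request stays packet
```

A real controller would derive these decisions from sFlow/NetFlow-style telemetry or scheduled events, as the article notes, rather than from static counters.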
Figure 1: Hybrid Packet-OCS Data center Network Architecture
The OCS fabric offers extremely low-latency paths (less than 60 ns) between packet-based top-of-rack switches (TORS), providing excellent support for latency-sensitive applications. It also scales without upgrade to support line rates of 10 Gbit/s, 40 Gbit/s, 100 Gbit/s and beyond as the uplinks on the TORS are upgraded. Finally, it supports the demands of today’s low over-subscription scenarios far more cost-effectively than all-packet-based solutions.
The link setup time of an optical circuit switch is typically 25ms. This is dictated by the fact that electrostatic re-positioning of micro-mirrors is required to achieve the switching. In the packet world, 25ms seems quite high. However, in a hybrid architecture like this, such setup times are completely acceptable, because most large flows persist for minutes or more and so the OCS setup time is irrelevant. In the interim period before an OCS connection is made, the packet network will continue to transport the traffic flow and no data is lost.
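The amortization argument above is simple arithmetic: a 25 ms setup time shrinks to a negligible fraction of any flow that persists for minutes:

```python
# The 25 ms OCS link setup time amortized over typical flow lifetimes.
SETUP_S = 0.025  # optical circuit switch link setup time, in seconds

for flow_duration_s in (1, 60, 600):
    overhead = SETUP_S / flow_duration_s
    print(f"{flow_duration_s:>4} s flow: setup is {overhead:.4%} of lifetime")
```

For a ten-minute flow the setup overhead is roughly 0.004 percent of its lifetime, which is why the packet network only needs to carry the flow for the brief interim before the circuit comes up.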
The Role of SDN
To complete the solution, the hybrid packet-OCS network equipment needs a means to control it. This can range from simple scripts to a software-defined network (SDN) implementation with high levels of network intelligence.
Figure 2 shows the well-known multi-layer SDN model with application, control, and infrastructure layers.
Figure 2 – SDN Model Layers
The important feature is that the packet and optical circuit switches can coexist together in the infrastructure layer with coordinated control from the upper layers.
In large data centers, this comes together as a management plane and a control plane containing both packet and circuit elements, as shown in Figure 3.
Figure 3 – Data center SDN Model Implementation
The management plane creates and manages topologies and related configurations, and analyzes the various flows within the network during run-time in coordination with photonic and routing-switching control planes.
Based upon operational needs (such as run-time triggers, scheduled and cyclic patterns, and maintenance activities), it creates new topologies and related configurations and propagates them to the respective control planes for asynchronous execution.
The control planes orchestrate the topology related configuration changes: The photonic engine manages the topology changes across the optical circuit switch fabric while the routing-switching engine further manages the topology changes across various packet-based routing and switching fabric elements.
Both engines track the execution status of each step in the configuration flow and report run-time statuses and command-processing replies for topology changes back to the management plane.
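The division of labor between the management plane and the two control-plane engines can be sketched as follows. Class and method names here are illustrative assumptions, not a real product interface:

```python
# Hypothetical sketch of the split described above: a management plane
# propagates topology changes to two control-plane engines (photonic
# and routing-switching), which execute and track per-step status.
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    DONE = "done"

class Engine:
    """Control-plane engine: tracks execution status of each step."""
    def __init__(self, name):
        self.name = name
        self.steps = {}

    def apply(self, step):
        # A real engine would push configuration to switches
        # asynchronously; here we record and complete each step inline.
        self.steps[step] = Status.PENDING
        self.steps[step] = Status.DONE

class ManagementPlane:
    def __init__(self):
        self.photonic = Engine("photonic")          # OCS fabric changes
        self.routing = Engine("routing-switching")  # packet fabric changes

    def propagate(self, topology_change):
        """Split a topology change and hand each part to its engine."""
        for step in topology_change.get("optical", []):
            self.photonic.apply(step)
        for step in topology_change.get("packet", []):
            self.routing.apply(step)
        return {e.name: dict(e.steps) for e in (self.photonic, self.routing)}

mp = ManagementPlane()
status = mp.propagate({
    "optical": ["connect tor-3 <-> tor-9"],
    "packet": ["reroute vlan 42 via ocs uplink"],
})
print(status["photonic"])  # the photonic step is recorded as DONE
```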
Hybrid Data Center of Today
The hybrid data center network was first envisioned by researchers a decade ago and is now being realized in commercial data centers where huge traffic loads and variable flow patterns are demonstrating the limitations of traditional all-packet based network architectures.
The hybrid packet-OCS network philosophy offers the capability to handle large, persistent data flows at whatever line rate the endpoint optics support, freeing up the packet network and removing bandwidth constriction. It also offers ultra-low latency (<60 ns), which is very important to modern latency-sensitive applications. In contrast, all-packet-based networks with lower over-subscription can have high levels of latency due to the size of the fabric. And finally, it provides the ability to scale to 100G and beyond.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | | 2:30p |
Data Center Buyer Behavior Survey Results – 2013 With the expansion of cloud computing, big data and new types of data center distribution models, the data center industry has been booming. More organizations are moving toward cloud computing, application virtualization and high-density computing. All of these new platforms are finding their way into the modern data center to help organizations grow and expand. As demand for data center capacity continues to rise, it’s important to know exactly what potential customers are looking for when pursuing business enhancements.
In February 2013, Compass Data Centers conducted a survey of upper-level (director and above) IT executives at enterprise-level organizations, defined as those with annual revenues in excess of $250 million, to determine the key considerations and drivers surrounding their data center decision-making processes. This white paper outlines the findings and helps keep track of an ever-changing data center industry.
[Image source: Compass Data Centers]
Many providers who have been able to keep up with demand have said that it’s good to be in the data center business right now. Still, it’s important to know what the top-level metrics are when it comes to selecting the right data center provider. Is it green technologies or does the service level play a larger role for the organization? Is the company worried about future expansion or are contractual terms the real deal breaker? By understanding the market, data center business professionals are able to make good decisions around where to take their technologies.
For example, the white paper shows how big data and the cloud are more than hype as over three quarters of respondents plan on adding these applications in their new data centers.
There’s no question that demand around the data center is going to continue to grow. There will be many more users, devices and information to store within the modern data center infrastructure. For providers seeking to stay ahead of the competition, download this white paper today to understand the latest 2013 buyer trends and statistics.
Windstream to Build Charlotte Data Center News from the data center industry includes Windstream expanding again in North Carolina with a 72,000-square-foot facility, Expedient juggling four data center expansion projects, and online game developer Wargaming selecting a second Equinix data center.
Windstream to build Charlotte data center. Windstream Hosted Solutions announced plans to build a new enterprise-class data center in Charlotte, North Carolina. Construction has already begun on the 72,000-square-foot facility, which will be completed in a phased approach with 10,000-square-foot suites. This will be Windstream’s fourth data center in Charlotte and seventh in North Carolina. It will be equipped to handle Windstream’s full suite of cloud computing, dedicated hardware, data storage and managed services. “Many of our customers have already realized the benefits of integrated, personalized solutions from Windstream Hosted Solutions, and demand for those services has continued to increase significantly year over year,” said Chris Nicolini, senior vice president of data center operations for Windstream Hosted Solutions. “This newest data center represents our commitment to providing customers with the highest level of services and reliability in order to meet the changing needs of their businesses.”
Expedient announces 4 expansion projects. Expedient announced another round of data center footprint expansion, adding capacity in four of their six core markets. The company will add a second data center in Baltimore, add 8,000 square feet to its Pittsburgh facility, complete an infrastructure upgrade in Indianapolis, and begin a phase II expansion in Columbus, Ohio. “We are very fortunate to have an enthusiastic corporate sponsor who understands the significant capital commitment required to meet the ever increasing capacity and reliability demands of our customers. We’ve conducted significant facility expansions, as well as significant investments in hardware and software to support our managed hosting and cloud platforms, in every one of our core markets at some point in the last three years,” stated Shawn McGorry, Expedient President and COO.
Equinix selected by Wargaming in Europe. Equinix (EQIX) announced that Wargaming, an online game developer in the free-to-play Massively Multiplayer Online (MMO) market, is expanding into Equinix’s Campus Kleyer data center in Frankfurt, Germany. Already a tenant in Equinix’s Amsterdam facility, Wargaming will expand into Campus Kleyer to take advantage of two Equinix data centers and benefit from access to an even greater number of network partners, along with additional capacity to grow its business. “The expansion into Equinix’s Campus Kleyer data center in Frankfurt was the next logical step in our commitment to providing the best possible experience for our players,” said Andre Reitenbach, managing director, Wargaming Interactive GmbH. “Equinix Amsterdam proved it could deliver a reliable, reduced-latency service, helping us to drive a better gaming experience and giving us an edge against our MMO gaming competitors. Frankfurt is the most important interconnection hub for network throughput in Europe, and locating there provides access to all the major providers and peering partners, allowing us to optimize connectivity to all gaming markets in Western and Eastern Europe.”
Google Powering Finnish Server Farm with Swedish Wind Farm 
Google will buy electricity from a new wind farm in northern Sweden to support a data center in Finland, the company said today. Google and O2, the wind farm developer, jointly announced that Google will purchase the entire 10-year electricity output of the new wind farm at Maevaara, in Övertorneå and Pajala municipalities in northern Sweden.
German insurance company Allianz has provided 100% of the financing for the project. The wind farm will be fully operational in early 2015. This is Google’s fourth long-term agreement worldwide to power data centers with renewable energy, and its first in Europe.
“As a carbon neutral company, we’re always looking for ways to increase the amount of renewable energy we use,” said Urs Hoelzle, Senior Vice-President of Technical Infrastructure at Google. “This long term agreement, our fourth globally, means we can power our Finnish data center with clean energy – and add new wind generation capacity to the European grid.”
72 Megawatt Capacity
This agreement allows the already carbon-neutral Google to run its Hamina, Finland data center on renewable energy. The 24-turbine project will have a capacity of 72MW. All planning approvals, permits and financing are in place, and construction will start in the coming months.
The purchase agreement with Google means O2 has secured 100 percent financing for the construction of the new wind farm from German insurance company Allianz. “Google’s decision to purchase the full output of the Maevaara wind farm for its Finnish data center was a key element in our decision to invest in the project,” said David Jones, Head of Renewable Energy at Allianz Capital Partners. “Maevaara is our first renewable energy investment in Sweden, and the Power Purchase Agreement implemented for this project offers an interesting model for further wind farm development in this market.”
“We’re delighted to be able to build yet another onshore wind farm, this time for Allianz and Google,” said Johan Ihrfelt, CEO of O2. “We’ll be using the latest generation 3MW turbines to ensure we get the maximum efficiency out of Maevaara’s great wind conditions.”
“This arrangement is possible thanks to Scandinavia’s integrated electricity market and grid system, Nord Pool,” the company wrote in a blog post. “It enables us to buy the wind farm’s output in Sweden with Guarantee of Origin certification and consume an equivalent amount of power at our data center in Finland. We then ‘retire’ the Guarantee of Origin certificates to show that we’ve actually used the energy.”
The move helps the environment, but also protects Google from future increases in power prices through long term purchasing. The company is investing in new renewable energy projects that will deliver a return for its money. Over $1 billion has been committed to such projects in the U.S., Germany, and last week, in South Africa. | | 3:37p |
AFCOM Symposium AFCOM, an association for data center and facilities managers, will host its third Symposium at Rydges South Bank, Brisbane, Australia, from Monday, September 9 through Wednesday, September 11, 2013.
The event provides education covering the entire data center from day-to-day operations to facilities-related issues, including the latest technology and management techniques, best practices, disaster recovery, power and cooling, asset management, cloud computing and more.
For more information, visit the AFCOM Symposium website.
Venue
Rydges
9 Glenelg Street, South Bank, Brisbane, Qld, 4101
Ph +61 733 640 800
For more events, please return to the Data Center Knowledge Events Calendar. | | 3:48p |
Data Center World Fall 2013 AFCOM’s Data Center World will host its fall event in Orlando at the Orlando World Center Marriott from September 29 through October 2.
Event organizers issued a request for speakers. The due date for proposal submissions is June 21. See this page for more information on submitting a speaker proposal.
The event offers its speakers opportunities for exposure and recognition as an industry leader. The sessions will attract many technical professionals interested in learning from your examples, expertise and experience.
AFCOM invites submissions from all those involved in data center and facilities management operations and support, including vendors, to share their knowledge at the upcoming Data Center World conference. Vendors who would like to speak are given greater consideration if they are participating exhibitors in this show. (Note: Any company that sells supplies, services or equipment to the data center is considered to be a vendor.)
For more information on the show and registration, visit the Fall Data Center World website. Please note the early bird registration rate is $1,295 and expires July 26.
Venue
Orlando World Center Marriott
8701 World Center Dr, Orlando, FL 32821
Phone:(407) 239-4200
For hotel bookings, the Data Center World reservation code is AFCATT12.
Reservations: +1-800-228-9290 or +1-800-621-0638
AFCOM DCW Hotel Link: https://resweb.passkey.com/go/AFCOM
For more events, please return to the Data Center Knowledge Events Calendar. | | 4:22p |
Velocity Conference 2013 When your job is to keep it all running: the Web, the Cloud, mobile apps, data flow and storage, and all the technologies that hold it together, you will probably be at Velocity Conference 2013. The best minds in web operations and performance come to Velocity each year.
This year the event is in Santa Clara from June 18 through June 20.
The conference focuses on the core aspects of building a faster and stronger web. At Velocity, you can hear from your peers, exchange ideas with experts, and share what has worked (and, equally importantly, what has not) in real-world applications.
For more information and registration, visit Velocity 2013.
Venue
Santa Clara Convention Center
5001 Great America Parkway
Santa Clara, CA 95054
Hotel
The Hyatt Regency Santa Clara
5101 Great America Parkway
Santa Clara, CA 95054 (map)
Phone: (408) 200-1234
Fax: (408) 980-3990
(Hotel is connected to the Santa Clara Convention Center)
For more events, please return to the Data Center Knowledge Events Calendar. | | 5:17p |
IU Dedicates Big Red II – Its New Supercomputer Indiana University (IU) recently dedicated a new supercomputer called Big Red II. The fastest university-owned supercomputer in the United States, capable of performing one quadrillion floating-point operations per second (1 petaflop), Big Red II uses Cray XE/XK technology, with 676 XK nodes (each containing one AMD “Interlagos” processor and one NVIDIA “Kepler” GPU) and 344 XE nodes (each containing two AMD “Abu Dhabi” processors). In this 1:26 video, WTIU’s Noelle Visser reports on IU’s new supercomputer and the benefits it will bring to the university.
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.
| | 5:43p |
Second Annual Greater Chicago Data Center Summit & Expo The National Data Center Summit Series, with regional events in major markets, continues with The Second Annual Greater Chicago Data Center Summit & Expo on June 27 in Chicago.
The event includes the most active and innovative data center real estate investors, developers, capital sources, technology firms and end-users.
The day’s featured panel is titled, “Energy Efficiency & Government Regulation: Analysis of Measures that Could Impact the High-Voltage Data Center Real Estate Industry.”
For more information and registration, visit CapRate events page.
Venue
Museum of Science and Industry
5700 S Lake Shore Dr Chicago, IL 60637
(773) 684-1414
For more events, please return to the Data Center Knowledge Events Calendar. | | 7:14p |
Data Center Efficiency Summit The Data Center Efficiency Summit will be presented on Tuesday, October 29 at HP in Palo Alto, California.
The Data Center Efficiency Summit is a signature event of the Silicon Valley Leadership Group, presented in partnership with the California Energy Commission and Lawrence Berkeley National Laboratory. It brings together engineers and thought leaders for one full day to discuss best practices, cutting-edge new technologies, and lessons learned by real end users.
Case studies are welcome: http://svlg.org/policy-areas/energy/data-center-efficiency-summit/2013-call-for-case-studies.
Venue
HP
3000 Hanover Street
Palo Alto, California
For more events, please return to the Data Center Knowledge Events Calendar. | | 8:08p |
Technology Convergence Conference The Technology Convergence Conference will be held on February 18, 2014, in Santa Clara, CA.
This one-day educational conference brings together IT, Facilities, and Data Center professionals and executives to learn from each other in a collaborative setting.
The Technology Convergence Conference is looking for case studies for the upcoming conference, with a submission deadline of August 30.
Venue
Santa Clara Convention Center/Mission City Ballroom
5001 Great America Parkway
Santa Clara, CA 95054
For more events, please return to the Data Center Knowledge Events Calendar. |