Data Center Knowledge | News and analysis for the data center industry
Thursday, March 28th, 2013
11:30a
With Switch Light, Big Switch Looks to Boost Open Source SDN
A diagram of the three-tier network featuring Switch Light, an open source effort that provides thin switching based on the OpenFlow project. (Image: Big Switch Networks)
Big Switch Networks this week introduced Switch Light, an open source thin switching software platform that can be deployed as a virtual switch for server hypervisors. Big Switch’s ambitious goal is to accelerate the adoption of OpenFlow-based networking, providing more choices in networking hardware and reducing the cost of operating virtual and physical networks.
As a part of the Big Switch Networks Open SDN Suite, Switch Light will include both open source and commercial solutions for the community and enterprise customers. Initially it will be available to run on a range of merchant silicon-based physical switches (Switch Light for Broadcom) and virtual switches (Switch Light for Linux), and will be ported to other data plane devices in the future. Switch Light will be available for free, with Big Switch Networks offering technical support and commercial services when it is deployed with other products in its suite.
“In making our open-source thin switching platform available to the market, we aim to accelerate the development of OpenFlow-based switches, both through ODM and OEM partners, thereby catalyzing the deployment of OpenFlow networks,” said Guido Appenzeller, CEO of Big Switch Networks. “Customers are demanding choice in Open SDN hardware and want to unite their physical and virtual platforms. Switch Light is an important step down that path.”
Among those welcoming the arrival of Switch Light was Quanta Computer, the Taiwan-based original design manufacturer that is moving into direct sales of servers based on designs developed by the Open Compute Project. Quanta could benefit from the development of open networking alternatives such as Software Defined Networking (SDN), just as it has from open server and storage designs.
“Quanta has a proven record of success and leadership in deploying servers, storage and top of rack switches to big data centers,” said Mike Yang, general manager and vice president of Quanta QCT. “We see demand for a similar model in the networking market that keeps growing but has traditionally been dominated by proprietary vendors. Partnering with an SDN innovator like Big Switch gives our clients a reliable choice with a complete open SDN stack.”
Switch Light is based on the open source Indigo Project, a sub-project within Project Floodlight, an open source SDN community project that was launched this week.
Project Floodlight
Big Switch also announced that Project Floodlight has grown to be the world’s largest open source SDN community, encompassing over 200,000 lines of code and 15,000 downloads of the Floodlight controller, with contributions from more than 10 ecosystem institutions and vendors around the world. The updated open source community website replaces OpenFlowHub.org. Big Switch Networks’ open source community initiatives date back to Stanford University and the Clean Slate Project, which was led by Appenzeller, the company’s CEO and co-founder.
At the core of Floodlight is the Indigo Project, which produces open source software that is used by developers and vendor partners to implement the OpenFlow protocol on physical and hypervisor switches. A growing list of innovative users and ecosystem vendor partners around the world are building applications and deployments on open source SDN projects including the Floodlight controller and the Indigo OpenFlow agent. Companies include 6WIND, Canonical, Caltech/CERN, FireMon, Overture, Radware, and SRI International.
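For readers who want a concrete sense of how applications drive an OpenFlow network through the Floodlight controller, here is a minimal sketch that pushes a single static forwarding rule over the controller’s REST interface. It is an illustration only: the endpoint path and field names follow the Static Flow Pusher conventions documented for Floodlight in this era, and the controller address, switch DPID and port numbers are hypothetical placeholders that would need to match a real deployment.

```python
# Minimal sketch: install one static OpenFlow rule via a Floodlight
# controller's Static Flow Pusher REST API. The endpoint and field names
# follow Floodlight documentation of this era; the controller address,
# switch DPID and port numbers are hypothetical.
import json
import urllib.request

CONTROLLER = "http://192.168.1.10:8080"  # hypothetical controller address

flow = {
    "switch": "00:00:00:00:00:00:00:01",  # DPID of the target OpenFlow switch
    "name": "demo-flow-1",                # arbitrary name for the rule
    "priority": "32768",
    "ingress-port": "1",                  # match traffic arriving on port 1
    "active": "true",
    "actions": "output=2",                # forward matching packets out port 2
}

req = urllib.request.Request(
    CONTROLLER + "/wm/staticflowentrypusher/json",
    data=json.dumps(flow).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```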
“Ubuntu is the most popular platform for OpenStack because of its ability to integrate cutting edge technologies,” said Kyle MacDonald, Canonical VP of Cloud. “Our enterprise and carrier customers are already demanding advanced networking solutions based on open standards, so we are supportive of Big Switch’s commitment to Open SDN and Project Floodlight, as they provide our customers the flexibility needed to build dynamic cloud infrastructure while simplifying network operations.”
Paul Lappas, head of open source at Big Switch Networks, wrote the introductory blog post about Project Floodlight and its transition from OpenFlowHub.org, which launched in January 2012.
12:30p
80 Million Hours of Digital Rendering Produce “The Croods” 
DreamWorks Animation SKG and HP have a long history of pushing the boundaries of hardware and animation innovation, with a partnership that delivers computing power to render digital effects for films such as “Rise of the Guardians” and “Kung Fu Panda 2.” DreamWorks’ latest movie, “The Croods,” pushed the boundaries of high-performance workloads, requiring more than 80 million render hours to generate visual images from 3-D models.
HP said its converged infrastructure rose to the challenge, enabling DreamWorks Animation to render the large amount of data required to produce the film. The studio increased render capabilities to an average of 500,000 jobs a day with the use of HP ProLiant BL460c Gen8 server blades.
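As a rough illustration of what those figures imply, the back-of-the-envelope sketch below treats the 80 million render hours as core-hours and assumes a hypothetical three-year production window and 16-core render blades; neither assumption comes from HP or DreamWorks.

```python
# Back-of-the-envelope sizing for an 80-million-render-hour film.
# The 80M figure is from the article; the production window and cores per
# node are hypothetical assumptions used only for illustration.
RENDER_CORE_HOURS = 80_000_000   # treating render hours as core-hours
PRODUCTION_YEARS = 3             # assumed wall-clock window
CORES_PER_NODE = 16              # assumed cores per render blade

wall_clock_hours = PRODUCTION_YEARS * 365 * 24
cores_busy = RENDER_CORE_HOURS / wall_clock_hours
nodes_busy = cores_busy / CORES_PER_NODE

print(f"~{cores_busy:,.0f} cores running around the clock")    # ~3,044
print(f"~{nodes_busy:,.0f} render nodes at full utilization")  # ~190
```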
“Cutting-edge digital manufacturing requires a huge amount of compute power and orchestrated collaboration across our studios,” said Derek Chan, head, Technology Global Operations, DreamWorks Animation (DWA). “HP Converged Infrastructure ensures that our filmmakers have the technical resources they require to bring their creative vision to life and deliver amazing films to our audiences.”
Serving as a foundation for the new movie were HP servers, storage, networking, services, management software and workstations, as well as HP printers and digital rendering. DreamWorks relied on ProLiant servers and HP Enterprise Services in render farms across four geographic locations.
Additionally, HP Z820 workstations with dual Intel Xeon processors helped artists work on large, complex scenes; HP Z800 workstations were used in DreamWorks Animation’s new state-of-the-art recording studio; and HP Remote Graphics Software allowed artists across studios in Glendale, Redwood City and Bangalore to collaborate in real time on a single desktop display.
HP hardware, software and services were used from start to finish in the making of “The Croods”, building on a relationship with DreamWorks that began in 2001. Through that relationship DreamWorks takes advantage of HP FlexNetwork architecture, HP Managed Print Services, HP 3PAR StoreServ Storage, HP Remote Intelligent Management Center, and HP DreamColor professional displays and printers.
12:44p
Digital Realty Acquires Property in Toronto Market
Employees of Digital Realty deliver a pre-fabricated electrical room on a skid to a data center site. The company says it will use its POD architecture to develop new space at a facility it has acquired in Ontario. (Photo: Digital Realty Trust)
Citing strong demand and limited supply in the Toronto data center market, Digital Realty Trust has acquired a mixed-use property in Markham, Ontario and will convert part of the site for data center use. Digital Realty, the world’s largest data center developer, said it paid C$8.65 million ($8.5 million US) to acquire 371 Gough Road in Markham, a 120,000 square foot property approximately 17 miles north of Toronto’s central business district.
“We have been tracking a significant amount of demand for enterprise-quality data center space with very limited supply in the Toronto market,” said Michael Foust, chief executive officer of Digital Realty. “The acquisition of this property expands our existing footprint and enables us to support our customers’ data center requirements in the Toronto market.”
The property currently consists of warehouse and some office space that is 48 percent leased to two tenants, according to Scott Peterson, chief acquisitions officer of Digital Realty. “As a data center, the facility is capable of supporting approximately 5.4 megawatts of IT load, or four 1,350 kW Turn-Key Flex suites, utilizing our new POD Architecture 3.0.”
Digital Realty isn’t the only data center provider that’s building in Toronto. Equinix recently began work on a large new data center in downtown Toronto, while Cologix is also adding new space. Both projects are adjacent to 151 Front Street, the city’s primary carrier hotel and connectivity hub.
Digital Realty operates a data center portfolio comprising 22 million square feet of space across 119 properties in 32 markets throughout Europe, North America, Asia and Australia. The company’s holdings include an existing property in the Toronto market, in Mississauga, Ontario.
2:30p
Utility Storage for Virtual and Cloud Computing
Today’s IT environment is being built around efficiency: better resource utilization, improved monitoring, and the consolidation of enterprise systems. Many organizations are building a business around an efficient and well-controlled IT environment. The idea is to create an IT-as-a-service model in which self-provisioned services and automation help control administrative overhead.
This is where technologies built around converged infrastructure can really help. Intelligent storage systems can help an organization cut costs and manage vital resources. In HP’s whitepaper, we learn how utility storage creates a unified platform for efficiency and growth. Modeled directly on the needs of virtualization and cloud computing, HP’s converged storage infrastructure delivers three direct benefits:
- Distributed environment multi-tenancy
- Geographical resource federation
- Cloud, virtualization, and data center efficiency
Furthermore, converged systems add more than advanced functionality. Designed to simplify the IT environment, a converged storage platform helps with both ease of management and agility. Administrators can control logical storage segments while remaining flexible across the entire organization.
Download HP’s whitepaper on converged storage environments to see how an organization can embark on the path to IT as a service. These platforms create an infrastructure capable of advanced automation and self-provisioned services on demand. Intelligent systems like HP’s utility storage help reduce administrative costs and deliver other benefits as well. By reducing the hardware footprint and deploying advanced platforms such as HP’s converged storage, organizations can also extend the life of their data centers: consolidated systems running more efficient technologies are easier to control, cool and maintain. In fact, according to the whitepaper, companies can extend the life of their data centers by two to five years through a combination of IT strategies. In building a robust environment, choose intelligent systems that keep the infrastructure flexible and agile while maintaining control.
2:30p
Oracle Launches New SPARC T5 Servers
Oracle (ORCL) announced a complete refresh of its midrange and high-end SPARC server lines with new SPARC T5 and M5 servers running Solaris. Building on the SPARC T4 platform, the new servers complete Oracle’s SPARC family, spanning entry-level, mid-range and high-end.
The SPARC M5-32 server is designed for large, complex workloads and features massive I/O and memory capacity. It is 10 times faster than previous generations and offers superior hardware domaining and RAS (reliability, availability and serviceability) capabilities. The new servers expand Oracle’s SPARC portfolio and enable near-linear scalability from 1 to 32 sockets, with one common core, one operating system, and one common set of systems management and virtualization tools, making them ideal platforms for building clouds.
The new T5 servers have set 17 world records in TPC (Transaction Processing Performance Council) and SPEC tests, according to Oracle. A SPARC T5-8 server equipped with eight 3.6 GHz SPARC T5 processors achieved a world record result of 8,552,523 tpmC for a single system on the TPC-C benchmark. Oracle demonstrated this world record database performance running Oracle Database 11g Release 2 Enterprise Edition with Oracle Partitioning. (Note: IBM is challenging Oracle’s benchmarking claims).
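To put the headline number in per-second terms, tpmC counts New-Order transactions completed per minute on the TPC-C benchmark, so Oracle’s reported result works out to roughly 140,000 such transactions every second:

```python
# tpmC measures New-Order transactions per minute on TPC-C, so the
# reported single-system result converts to a per-second rate like this:
tpmC = 8_552_523                          # SPARC T5-8 result cited above
print(f"{tpmC / 60:,.0f} New-Order transactions per second")  # ~142,542
```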
“Oracle has refreshed its SPARC family with the world’s fastest processor and launched the world’s fastest single server for Database, Java and multi-tier applications,” said John Fowler, executive vice president, Systems, Oracle. “The new SPARC T5 and M5 systems leapfrog the competition with up to 10x the performance of the previous generation, offering an unbeatable value for midrange and high-end enterprise computing.”
Oracle also announced two new Oracle Optimized Solutions that exploit the performance, reliability and value of SPARC T5 servers, Oracle storage, Oracle Database and Oracle Middleware. These include Oracle Optimized Solution for Oracle Database, and Oracle Optimized Solution for WebLogic Server.
“SAS Business Analytics enables faster, more accurate data-driven decisions. Implementing SAS Business Analytics on SPARC servers with Oracle Solaris solves critical business issues in transformational ways. SPARC and Oracle Solaris have long been a proven platform for SAS applications,” said Craig Rubendall, Senior Director of Research and Development at SAS. “We’ve seen in-house that the technically advanced features and design of the SPARC M5 servers along with processor and throughput enhancements provide a very well-suited platform for enterprise class SAS application deployments.”
2:47p
Standardizing Data Center Education Can Work Wonders
Tom Roberts is president of AFCOM, the leading association supporting the educational and professional development needs of data center professionals around the globe.
If you’re struggling to fill a job in the data center, you are not alone. With approximately 4 million IT jobs available just in the United States, to say a shortage of qualified people exists today is an understatement. It has created a worker’s market with only 4 percent unemployment in the technology sector—about half the overall jobless rate.
This shortage exists for any number of reasons:
- Graduates from universities and two-year tech schools entering the workforce are greener than a solar-powered data center and require far too much on-the-job training.
- Those currently employed in IT seem to be staying put because they like what they’re doing and companies are no longer in layoff mode.
- Others have reached or are rapidly approaching retirement and taking their decades of experience with them.
- High-profile companies like Facebook, Google and Apple seem far more “sexy” than traditional corporations and compete directly for the best and brightest of the younger generation.
I believe the best way to conquer all of the above challenges is to standardize the education path for data center professionals. Treat those in the industry just like the architects, engineers, school administrators, mental health professionals and social workers who must adhere to rigorous CEU requirements to move up the ladder or stay qualified.
The source of education is secondary; it can be gained through tech schools, conferences or corporate America. This will help boost standardization with respect to career paths, job descriptions, and skillsets.
Here’s what I would like to see happen. In addition to being president of AFCOM, I’m chairman of Data Center World, a conference and trade show for data center professionals. For the first time we are offering attendance certification for those who attend our educational sessions. Then, in the near future, these records of attendance can be used as CEUs to supplement their current certification(s) obtained from the leading data center education companies (EPI, ICOR, C-Net Training, IDCP, etc.).
As an association with a goal of advancing data center and facilities management professionals, AFCOM’s role is to provide ongoing education, as it has for more than 30 years. I think it makes a lot of sense to work hand-in-hand with these companies so that education gained from conferences also counts toward specific career goals/paths.
If we can cross-track and document education from all different sources and provide an easy way for data center professionals to access a composite list, it would be a win-win for those recruiting and looking for work.
Right now, inconsistencies are far too common. Two companies may be recruiting for the same position, e.g. a facilities manager, but the actual responsibilities and needed skills don’t match up. No two IT job descriptions are the same. You may attract a person with a mechanical engineering degree and someone who fixes furnaces with the same advertisement. I read that the typical time-to-hire process for an individual IT resource is 55 days. Who has that kind of time?
Change never stops in this industry, and now more than ever, you must keep your data center current or fall behind. The fact that so many companies can’t seem to find the right people with the right skills is a disaster waiting to happen.
Let’s all work together to make sure it doesn’t.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
3:00p
SoftLayer Infrastructure Supports 100 Million Gamers
SoftLayer’s Singapore data center (pictured above) provides an international footprint that is key to some of the company’s game developer customers. SoftLayer says its infrastructure now supports more than 100 million gamers. (Photo: SoftLayer)
SoftLayer now supports more than 100 million active game players online worldwide, and has added 60 new gaming companies to its customer list in the last two quarters alone, the company said this week.
What’s SoftLayer’s secret sauce with gaming companies? In a world in which a couple of milliseconds of lag can be the difference between virtual life and death, and the inability to scale up with demand can kill a new game title, there is little room for failure on the part of the hosting provider. In the game hosting arena, success breeds success, allowing SoftLayer to build organic growth atop its track record.
The company has built a solid reputation via gaming conferences and word of mouth. “It’s kind of a tight knit space, and they all talk,” said Marc Jones, VP of Product Innovation at SoftLayer. Notable game developers among SoftLayer’s customers include Hothead Games, Geewa, Grinding Gear Games, Peak Games and Rumble Entertainment.
“Game developers don’t have the time or resources to manage their own complex infrastructure because they need to focus on their core business – developing great games, launching on time and keeping players engaged,” said Jones. “Because we understand the high stakes of their operations, we’ve tailored our cloud platform to meet gaming companies’ demands – from initial game release and explosive, overnight growth, to the performance and availability demands that come with everyday play.”
SoftLayer provides these companies with the infrastructure to test, deploy, manage, play, and grow their games. SoftLayer’s global infrastructure platform spans more than 100,000 servers in 13 data centers across the U.S., Europe and Asia. The company’s ability to provide bare metal (dedicated servers) combined with a low-latency network is key to its appeal.
Bare Metal vs. Public Cloud
“The ability to have hybrid solutions from a bare metal standpoint is perfect for a gaming world with a lot of real-time interactions, multi-player and social aspects,” said Jones. “There’s constant communication. A lot of the companies are capturing interaction points to understand how people are engaging, and using this info to help them tailor the experience. Bare metal gives you much better performance than public cloud, and they have the ability to use public cloud to scale when needed during spikes.”
In most of those cases, gaming companies are running a database on the back end, and many of them are leveraging NoSQL database options in particular, according to SoftLayer. One example is Hothead Games, which has released six games that have hit the Top 10 in both the Apple and Google Play app stores, including the BIG WIN Sports series and the recently launched Rivals at War. The company runs its back-end database, Cloudant, on the SoftLayer platform, enabling Hothead Games to scale massively and economically, handling billions of database transactions per day while delivering a superior experience to gamers.
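For a sense of scale, “billions of transactions per day” averages out to tens of thousands of operations per second. The sketch below shows that arithmetic alongside what a single event write could look like against Cloudant’s CouchDB-compatible HTTP API; the account URL, database name and event fields are hypothetical, and authentication and error handling are omitted.

```python
# Scale arithmetic plus a single document write against a Cloudant-style
# (CouchDB-compatible) HTTP API. The account URL, database and event fields
# are hypothetical; authentication and error handling are omitted.
import json
import urllib.request

# 2 billion transactions/day is an assumed figure for "billions per day".
print(f"{2_000_000_000 / 86_400:,.0f} operations per second on average")  # ~23,148

ACCOUNT = "https://example-account.cloudant.com"  # hypothetical account URL
DB = "game_events"                                # hypothetical database

event = {"player_id": "p-1234", "event": "level_complete", "score": 98000}

req = urllib.request.Request(
    f"{ACCOUNT}/{DB}",
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # would create one JSON document in the database
```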
“From a compute standpoint, (bare metal cloud) is definitely what we see as an advantage,” said Jones. “On equal footing is our network. Maintaining a private network that interconnects all our data centers is appealing.” The company has its own private network, which allows it to deliver a predictable, low latency experience.
The company’s international presence is also a selling point. “A lot of our bigger gaming customers have a lot of servers deployed in multiple data centers,” said Jones. “A few customers are active in Amsterdam, Singapore and the US.”
Gaming Trends
By all accounts the online gaming vertical continues to grow at a rapid pace. “We definitely see a lot of online games – Facebook style games, social games and mobile applications,” said Jones. “Those are the ones we’ve seen the most in the last six months to a year. We have hundreds of gaming customers, and the size of those customers is usually pretty substantial. They’ll build out that infrastructure, as they get popular.”
The company can’t disclose its largest customers, but provided examples from mobile, first-person shooter (FPS) and massively multiplayer online (MMO) games, all of which have unique needs.
The Social Gaming Customer: Peak Games
Peak Games is the largest and fastest-growing gaming company in Turkey, the Middle East and North Africa. With over 200 employees and 45 million gamers, it is one of the largest global social gaming providers. “The ability to add tens of thousands of users overnight is the value of working with SoftLayer,” said Safa Sofuoglu, CTO of Peak Games. “SoftLayer understands the needs of game developers. We can very quickly double our infrastructure requirements when one of our games takes off, and easily manage and support new users without compromising on performance while not incurring massive costs. SoftLayer gives us the flexibility to utilize what we need without being locked in.”
The FPS: Ballistic
Rumble Entertainment has a first person shooter named Ballistic. With an FPS, low latency is of the utmost importance, with players obsessing down to the millisecond. These are games of accuracy and precision. “We’re expanding our first-person shooter game Ballistic into Asian markets, and we wanted to partner with a cloud service provider that could deliver not only raw computing power but also high-quality network service,” said Jim Tso, senior producer for Rumble Entertainment. “SoftLayer’s data center in Singapore and global network footprint help us overcome any network latency issues, giving our users a great online experience.”
The MMO: Path of Exile
“Path of Exile is unique among online action RPGs (role playing games) because players play on one large international realm,” said Chris Wilson, managing director for Grinding Gear Games. “SoftLayer’s data centers on multiple continents, and the free bandwidth between them, let us run servers local to the individual players while still allowing them to play with their international friends if they choose to. SoftLayer’s ability to provision new servers quickly allowed us to deal with the immense demand we faced when we launched Path of Exile’s Open Beta. We’re extremely pleased with SoftLayer and the server reliability that it allows us to offer our customers.”
3:15p
In Texas, A Stampede of Petaflops
The Stampede supercomputer is housed in nearly 200 cabinets in a new data center at the Texas Advanced Computing Center in Austin. (Photo: TACC)
In its first days of operation, the new Stampede system at the University of Texas at Austin’s Texas Advanced Computing Center (TACC) debuted as the world’s seventh-fastest supercomputer. But there’s plenty more power in the pipeline.
For its first outing on the prestigious Top500 list, Stampede harnessed 6,400 nodes with two Intel Xeon E5 processors each, recording a speed of 2.6 petaflops. The pending addition of 6,880 Xeon Phi coprocessors, which are currently in user evaluation mode, would add more than seven petaflops of performance to Stampede. With a theoretical peak performance of nearly 10 petaflops (10 quadrillion mathematical calculations per second), Stampede would sit comfortably among the top four supercomputers in the world.
Stampede is a massive Dell/Intel cluster, and is a centerpiece of the National Science Foundation’s investment in an integrated advanced cyberinfrastructure. The system also features NVIDIA GPUs for remote visualization, a Lustre file system, Mellanox InfiniBand networking, 270 terabytes of memory, and 14 petabytes of storage. The data center housing Stampede is 11,000 square feet and consumes an average 3 megawatts of power.
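Dividing the article’s own figures gives a feel for where the performance and power go. The sketch below mixes a theoretical peak with a measured base-system number, so treat it strictly as an order-of-magnitude check rather than a benchmark:

```python
# Order-of-magnitude checks using only figures quoted in the article.
peak_pflops = 10.0        # approximate theoretical peak
base_pflops = 2.6         # measured on the Xeon E5 base system
phi_count = 6_880         # Xeon Phi coprocessors being added

phi_pflops = peak_pflops - base_pflops            # ~7.4 PF from coprocessors
print(f"~{phi_pflops * 1000 / phi_count:.1f} teraflops per Xeon Phi")  # ~1.1

power_watts = 3_000_000   # average facility draw
area_sqft = 11_000        # data center floor space
print(f"~{power_watts / area_sqft:.0f} watts per square foot")  # ~273
```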
Coprocessors like Intel’s Xeon Phi supplement the performance of the primary processor, and have become a common feature in the fastest supercomputers. Phi is the new brand for products using Intel’s Many Integrated Core (MIC) architecture for highly parallel workloads.
Yesterday Stampede was formally introduced to the public at a dedication ceremony at TACC. The system, which began operating on January 7, has successfully executed more than 450,000 computational jobs to date.
Powering New Scientific Research
The supercomputer has enabled research teams to predict where and when earthquakes may strike, how much sea levels could rise, and how fast brain tumors grow. It allows scientists and engineers to interactively share advanced computational resources, data and expertise to further research across scientific disciplines. Some of the early research examples Stampede has completed includes seismic hazard mapping, ice sheet modeling to study climate change, improving the imaging quality of brain tumors, and carbon dioxide capture and conversion.
“Stampede has been designed to support a large, diverse research community,” said TACC Director Jay Boisseau. “We’re as excited about Stampede’s comprehensive capabilities and its high usability as we are of its tremendous performance. Stampede will lead the way to major advances in all fields of science and engineering. It’s an honor to be at this intersection of advanced computing technologies and world-class science, and we thank NSF, Dell, and Intel for their roles in helping TACC design, deploy, and operate Stampede.”
Farnam Jahanian, head of the NSF Directorate for Computer and Information Science and Engineering, helped dedicate Stampede, joined by U.S. House Science, Space, and Technology Committee Chairman Lamar Smith and representatives from Dell, Intel, UT Austin and TACC.
“Stampede is an important part of NSF’s portfolio for advanced computing infrastructure, enabling cutting-edge foundational research for computational and data-intensive science and engineering,” said Jahanian. “Society’s ability to address today’s global challenges depends on advancing cyberinfrastructure.” The base Stampede system has been accepted by NSF.
3:30p
An In-Depth Guide for Data Center Transformation
The modern data center consists of numerous vital components, all working together to facilitate the delivery of information. Now, more than ever before, the data center has truly become the heart of any organization. Big or small, the growing reliance on data center environments is evident. During this growth, many administrators began to adopt technologies aimed directly at efficiency. In some cases that meant better cooling systems and power capabilities; in other cases, efficiency revolved around high-density computing platforms.
Now, many data centers are being tasked with new types of technological requirements. This can range from hosting a virtual desktop infrastructure to running a cloud platform. Many organizations are adopting some type of cloud model. Whether it’s public, private, hybrid or community, businesses are seeing benefits in cloud computing. The bottom line is this: to truly transform your data center, you will need a holistic framework.
In Cisco’s comprehensive guide, we see the roadmap for a successful transition so that your organization can identify and achieve business goals. The conversation revolves around Cisco Domain Ten, a framework that can be applied to a diverse range of data center projects, from cloud and desktop virtualization to application migration, and is equally applicable whether your data center serves an enterprise, a public sector organization or a service provider.
Download this guide to better understand the data center transformation process and the key steps along the way. Cisco’s framework helps administrators ensure that the key aspects are considered and, where appropriate, acted on as you plan, build and manage your data center project.
5:00p
Arista Integrates Natively With OpenStack
Arista Networks announced the next phase of its software defined networking (SDN) offerings by integrating its EOS (Extensible Operating System) natively with OpenStack.
“Arista continues to lead the way in data center network innovations,” said Paul Rad, vice president, Rackspace. “This is the first real API integration of a broad-based data center network platform, and seeing it connect with OpenStack and solve real customer provisioning issues is exactly what this industry has needed to scale cloud computing.”
With the latest release of EOS, Arista offers a suite of SDN capabilities that make the network programmable. The new release includes EOS application programmatic interfaces (eAPIs) for integration with leading orchestration and provisioning tools and customer applications.
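As an illustration of what that programmability looks like in practice, the sketch below issues a CLI command to a switch through eAPI’s JSON-RPC interface. The switch address and credentials are hypothetical, eAPI must be enabled on the switch first, and a lab device with a self-signed certificate may need extra TLS handling.

```python
# Minimal sketch: run an EOS CLI command through Arista eAPI (JSON-RPC
# over HTTPS). Switch address and credentials are hypothetical; eAPI must
# be enabled on the switch, and self-signed certificates may need handling.
import base64
import json
import urllib.request

SWITCH_URL = "https://10.0.0.5/command-api"  # hypothetical switch address
USER, PASSWORD = "admin", "admin"            # hypothetical credentials

payload = {
    "jsonrpc": "2.0",
    "method": "runCmds",
    "params": {"version": 1, "cmds": ["show version"], "format": "json"},
    "id": 1,
}

auth = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req = urllib.request.Request(
    SWITCH_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Basic " + auth},
)
print(json.loads(urllib.request.urlopen(req).read())["result"][0])
```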
EOS features a new modular hardware driver architecture in the Quantum OVS plugin, and an open source version of Arista’s driver. Arista also contributed code to the OpenStack Quantum project that enables unified physical and virtual network device configuration. Other new features include a native OpenStack provisioning capability, OpenFlow 1.0 support for external controllers, and enhanced data plane programmability via direct flow-based OpenFlow extensions.
“Extending Arista EOS for connection to cloud orchestration platforms provides programmability for building agile, self-service cloud architectures. This has been core to Arista EOS development from its inception,” stated Tom Black, vice president, SDN Engineering for Arista. “These software innovations demonstrate Arista’s increasing relevance and agility in addressing SDN for public cloud operators and private clouds.”