Data Center Knowledge | News and analysis for the data center industry
Wednesday, January 30th, 2013
Microsoft’s Janous to Keynote Spring Data Center World  Microsoft Utility Architect Brian Janous will present the keynote address at the Data Center World Spring 2013 conference, being held in Las Vegas from April 28 to May 2. He will discuss Microsoft’s pilot “Data Plant” project.
Highlighting the primary importance of energy in data center operations, Microsoft’s Brian Janous will deliver the keynote address at the Data Center World Spring 2013 conference, which will be held in Las Vegas from April 28 to May 2.
Janous will speak about Microsoft’s recently announced Data Plant pilot project, the first phase of its ambitious plan to place data centers alongside sources of renewable energy, with no connection to the utility power grid. The Data Plant project will convert recycled waste into a data center energy source.
“The reduction of energy use in data centers is an enormous issue in our industry,” said Tom Roberts, president of AFCOM, the industry group for data center managers that organizes Data Center World. “Microsoft’s investment in this area will be of tremendous interest to Data Center World attendees. Brian will provide valuable insight into the process of leading cloud‐scale infrastructures toward a more profitable and sustainable future through strategic energy sourcing, innovative solutions to supply power to data centers through on‐site generation, and the issues of power availability.”
Janous is responsible for developing Microsoft’s global energy strategy for data centers that provide cloud infrastructure for more than 200 Microsoft online services. He oversees all the company’s energy supply agreements, distributed generation, and strategic partnerships. He came to Microsoft last year after more than 12 years working in the energy industry, consulting with Fortune 500 companies and government agencies on energy supply and sustainability.
Dramatic Gains in Efficiency
“Cloud computing is transforming the way we live and interact with technology, and to support this, our industry is experiencing tremendous growth,” said Janous. “At Microsoft, with projects like our Data Plant, we’re envisioning new scenarios to help us drive dramatic gains in efficiency by looking well beyond traditional models of data center design.”
Janous says that as the data center industry matures, it can learn from other energy-intensive industries and even the energy sector itself. “The biggest challenge is the capacity planning issue,” said Janous. “How do you continue to have the right amount of capacity in the right place at the right time?”
While Microsoft is building cloud infrastructure at scale, Janous says the solutions to its energy challenges will be relevant to data center managers. “We’re really focused on technologies that work, and are scalable and deployable,” he said. “These are technologies that have promise for our industry. My hope is that this will enable broader adoption.”
In his keynote, Janous will also discuss a new study by Global e‐Sustainability Initiative (GeSI) entitled “The Role of ICT in Driving a Sustainable Future,” which examines the potential for the IT industry to reduce annual emissions through the use of video conferencing and smart building management.
Data Center World Spring 2013 will include more than 60 educational sessions covering all aspects of data center and facilities management, including disaster recovery, DCIM, data center builds and design, power and cooling, and cloud, as well as pertinent peer-to-peer user sessions and round table discussions.
Panzura Sees Gains for its Global Cloud Storage Platform Cloud storage solutions company Panzura announced a banner year for its Global Cloud Storage Platform, certification from Cisco for its UCS E Server, and a successful customer implementation.
Strong adoption in 2012
Panzura’s global cloud storage solution takes a unique approach to data management, building NAS features into its cloud on-ramp appliances. In 2012 the company reported that revenue increased by nearly 500 percent over the previous year, its customer base grew by 700 percent, and customers surpassed multiple petabytes of storage on the Panzura Global Cloud Storage System. With typical deal sizes in the multi-millions of dollars, Panzura has customers across a variety of industries, including healthcare, media and entertainment, financial services, government and education. DreamWorks Animation SKG uses Panzura to leverage follow-the-sun content production, and as a platform for archiving reusable content in a central repository.
“We are tremendously excited by the enthusiastic reception Panzura solutions have received in the market,” said Randy Chou, CEO of Panzura. “Customers continue to find new ways to leverage our core technology to meet a wide variety of storage needs in a broad array of use cases and industries. Panzura is expanding rapidly to meet growing demand as IT executives perceive Panzura as the only enterprise-class platform for leveraging cloud storage across all leading providers.”
Cisco UCS certification
Panzura announced that its Global Cloud Storage System has been certified by Cisco for its UCS E Server, providing Panzura’s local NAS features plus integration with major cloud storage platforms. Available in both physical and virtual form factors, the Panzura Quicksilver Cloud Storage Controller runs as an ESX virtual machine (VM) on Cisco’s UCS E Server. The Cisco-certified Quicksilver VM can be used alone or in concert with physical or virtual Quicksilver controllers deployed throughout an enterprise’s infrastructure to enable seamless cloud storage, file sharing, data protection, and access to any data from anywhere, at any time, by any user, subject to permission policies.
“Adding Panzura to the Cisco Developer Network as a VM for the UCS E Server brings enterprise-class cloud-attached storage to Cisco users for the first time,” said Jim Thayer, vice president of channels and business development at Panzura. “No other cloud storage solution supplies Panzura’s scale, features, military-grade security, or performance to Cisco users.”
Panzura selected by Healthcare Realty
Panzura announced that Healthcare Realty, a leading real estate investment trust with more than 30 sites that owns, manages and develops properties associated with the delivery of healthcare services, has selected Panzura’s Global Cloud Storage System and an EMC Atmos private cloud to replace its global Citrix implementation for file distribution and sharing. Looking to improve the quality of file sharing within the company, Healthcare Realty selected Panzura in particular for its global file locking capability, which prevents file corruption from simultaneous writes. Users have responded by saying they see a “tremendous difference” in performance and access, characterizing the system’s responsiveness as “amazing.”
“We really struggled with Citrix to provide our distributed users across more than 30 offices with rapid access to the files we need to run our business,” said Robert Dillard, Associate Vice President of Technology Services at Healthcare Realty. “Panzura allows rapid, secure serving of files to any user regardless of location, seamlessly integrating with the cloud as a very low-cost, flexible storage tier that can instantly scale to whatever capacity we need. Users love it.”
QTS: Free Blanking Panels for Everyone!  QTS is providing up to 60,000 free blanking panels to customers to control the airflow in their racks, improving energy efficiency. Here’s a look at some of the new blanking panels in QTS’ Miami facility. (Photo: QTS)
QTS (Quality Technology Services) has purchased more than 60,000 blanking panels that it will install at no charge for all of its customers. The installation follows a successful pilot program in the company’s Miami data center.
Blanking panels are simply pieces of plastic or metal that cover empty slots in the front of a rack to maintain proper airflow. It’s as low tech as you can get, but still important. Any gaps in a server cabinet can change airflow, so a partially-filled rack without blanking panels can lead to inefficient cooling, allowing hot air to recirculate within a rack. That lesson is reinforced in guidelines on data center cooling from ASHRAE TC9.9: “Sections of the rack that are not populated must be fitted with blanking panels to ensure that hot exhaust air cannot be drawn back into any of the IT equipment at the front surface.”
Blanking panels are particularly useful in hot/cold aisle strategies. It’s a simple but elegant solution.
Challenges of Multi-Tenant Environment
Proper use of blanking panels can be a no-brainer in an enterprise data center. But in a multi-tenant environment, these types of energy efficiency enhancements aren’t as common or as obvious as you’d think. Individual customers have less incentive to install blanking panels because of the cost and the less obvious improvement to their bottom line. QTS’ provision of free tool-less (snap-in style) blanking panels is news because it’s an investment on behalf of its tenants, and represents a big commitment by a multi-tenant data center provider to improving efficiency.
QTS worked closely with its vendor to develop a tool-less, recyclable steel blanking panel. “We knew some of our customers would prefer to self-install the blanking panels in their environments,” said Brian Johnston, chief technology officer at QTS. “In working with our vendor, we were able to develop a blanking panel that was not only environmentally friendly, but also fast and easy to install.”
While this may seem like low-hanging fruit for leading players, it’s clearly a challenge in colo environments, where the benefits of efficiency don’t necessarily go directly to the customer’s bottom line. By providing panels and installation at no charge to its customers, QTS ensures its facilities are held to green energy standards while improving temperature control.
More Consistent Temperatures
“We’re pleased with the QTS blanking panel program, which not only saves energy in the data center, but has been good for our infrastructure,” said Brad Koester, IT director, Information Technology Infrastructure & Governance at Republic Metals Corporation, a QTS customer. “The on-board sensors in our servers and storage indicate that the temperatures are more consistent with the panels.”
“The blanking panel initiative has enhanced our running dialogue on sustainability with our customers,” said Sukrit Sehgal, director of sustainability for QTS. “We’re looking forward to matching the success of the program in Miami across our entire customer base in our data centers nationwide.”
The addition of blanking panels in QTS data centers nationwide continues the company’s ongoing efforts to improve energy efficiency. In 2012, QTS announced LEED Gold certification of Data Center 1 at its Richmond Data Center and Phase II at the company’s Atlanta Metro facility. Since 2011, QTS has recycled more than six million pounds of materials from its data centers, including copper, aluminum, steel, plastic and concrete, and continues its $10 million multi-year energy efficiency investment.
Mega, DreamHost Launch Cloud Storage Services We noticed some interesting stories on The WHIR this week related to developments in the storage and cloud sectors:
What Controversial Storage Service Mega Wants from Service Provider Partners
Early last week, within hours of launch, around half a million people signed up for cloud storage and file sharing site Mega. This story is not just about those consumers of cloud storage, but about providers as well. An interesting element is that a handful of notable European service providers, including DigiWeb, Hosting.co.uk, EuroDNS, and Instra, have started working with Mega — despite last year’s shutdown of MegaUpload by U.S. officials — in various investment and reseller capacities.
DreamHost Cloud Storage Service DreamObjects Launches, Undercuts Amazon S3
Hosting provider DreamHost reported that its DreamObjects object storage cloud is now generally available. The differentiator for the service is cost: at 7 cents per gigabyte, it is priced below the base-level per-gigabyte rate of Amazon’s market-leading S3 cloud storage service.
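A quick back-of-the-envelope comparison makes the gap concrete. The sketch below assumes Amazon’s early-2013 first-tier S3 price of roughly $0.095 per gigabyte-month (an assumption here) and ignores S3’s volume tiers and request/transfer fees, so the figures are illustrative only:

```python
# Back-of-the-envelope monthly storage cost at a flat per-GB rate.
# The DreamObjects rate comes from the article; the S3 figure is an
# assumed early-2013 first-tier price and ignores volume discounts
# and request/transfer charges.

DREAMOBJECTS_PER_GB = 0.07    # $/GB-month, per the article
S3_FIRST_TIER_PER_GB = 0.095  # $/GB-month, assumed for illustration

def monthly_cost(gigabytes, rate_per_gb):
    """Flat-rate monthly cost for a given amount of stored data."""
    return gigabytes * rate_per_gb

for gb in (100, 1000, 10000):
    do_cost = monthly_cost(gb, DREAMOBJECTS_PER_GB)
    s3_cost = monthly_cost(gb, S3_FIRST_TIER_PER_GB)
    saving = 100 * (1 - do_cost / s3_cost)
    print(f"{gb:>6} GB: DreamObjects ${do_cost:,.2f} vs S3 ${s3_cost:,.2f} ({saving:.0f}% less)")
```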
Transitioning To Your Own Data Center This is the second article in a series on Creating Data Center Strategies with Global Scale.
It is very important for your initial plan to include one or more points on the growth curve that serve as triggers for the decision to transition away from a co-lo or hosting operation and into your own data center. This avoids having to over-expand in a co-lo or hosting operation while your new data center is being built. Once the rate of growth in the new territory reaches that point, the earlier you are prepared to move ahead with building or buying your own data center, the more cost effective it will be.
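As a minimal sketch of this trigger-point reasoning, the toy model below projects load growth forward to a hypothetical break-even point and subtracts the build lead time; all cost and growth figures are invented placeholders, not industry benchmarks:

```python
# A toy model of the trigger point: project load growth forward, find the
# month when the load hits a (hypothetical) break-even point where owning
# beats co-lo, then subtract the build lead time to get the latest month
# the build decision can be made.

BUILD_LEAD_TIME_MONTHS = 24   # article: 18-24 months domestically, up to 36 abroad
BREAK_EVEN_KW = 500.0         # hypothetical load at which owning wins

def months_until_trigger(current_kw, monthly_growth_rate):
    """Months from now until the build decision must be made.
    A result <= 0 means the decision is already overdue."""
    if monthly_growth_rate <= 0:
        raise ValueError("model assumes a growing load")
    kw, months = current_kw, 0
    while kw < BREAK_EVEN_KW:
        kw *= 1 + monthly_growth_rate
        months += 1
    return months - BUILD_LEAD_TIME_MONTHS

# A site at 150 kW growing 5% per month must commit to a build in ~1 month.
print(months_until_trigger(current_kw=150.0, monthly_growth_rate=0.05))
```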

Managing the timeline for transitioning to your own data center will impact the initial recurring and capital costs as well as the time of final delivery. Design and construction of a new data center typically ranges from 18 to 24 months domestically, but in some countries it can stretch to 36 months. One benefit of using larger global providers is that they have a well-established process and are more likely to complete a project quickly in foreign markets where they have existing operations and support staff.
New Data Center
Upon deciding that you are ready to have your own data center in a foreign location, it is important to remember that all computing resources still depend on the physical data center, which houses, protects and supports the underlying hardware. Its primary mandate is still security and reliability; however, data centers are no longer fixed, monolithic, inflexible structures. While they have become larger as a whole, they have also become more flexible in order to meet the ever-changing demands of IT equipment and computing architectures. Modular, expandable designs have become mainstream rather than leading-edge proof-of-concept demonstrations. These newer designs allow for phased growth, ease of expanded provisioning, and the ability to accommodate moves, adds and changes rapidly and cost effectively.
Previously, most large corporations and financial institutions would almost always build, own and operate their own dedicated data centers. Depending on your organization’s type and size, performing an honest self-evaluation of core strengths and of the design-build expertise and experience of internal resources is a sometimes difficult but necessary task in making the best Build vs. Buy decision. For more information, see part 1 of the Executive Series, “Build vs Buy.”
Cross Training of Key Personnel – Differing International Standards
It is imperative that the strategy include cross-training key support personnel in the similarities and differences of critical systems in different locations, so that they can support and monitor those systems remotely or even be dispatched locally if necessary.
Within your own support systems, try to standardize as much as possible on the type of equipment and systems used. Time zone coordination and overlap for Network Operations Centers should be part of the global support strategy (follow the sun). Remote monitoring is crucial: it allows for a unified common knowledge base for your support staff, both for the physical facility and for IT systems. However, note that most other countries use the metric system, which can cause some common items to vary, such as the physical dimensions of racks, as well as differing international voltages and, in particular, electrical connectors.
For example, in the US, standard floor tiles are 24 inches (610mm) wide and rack cabinets are also standardized at 24″ (or 30″) widths. In Europe, the standard is 600 millimeters (or 750mm), so ensuring that the correct size cabinets for the locale are ordered is a simple but sometimes overlooked detail. The same is true for equipment power cords and rack power strips, since the plugs and receptacles, as well as the voltages, differ in other parts of the world. Even temperature measurements for environmental monitoring and management systems will be metric, expressed in degrees Celsius instead of Fahrenheit, so some basic training of staff on international conversions is suggested.
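The conversions involved are simple, which is exactly why they get overlooked. A quick sketch in Python:

```python
# The conversions behind the locale pitfalls mentioned above.

def inches_to_mm(inches):
    """Convert imperial inches to millimeters."""
    return inches * 25.4

def fahrenheit_to_celsius(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32) * 5.0 / 9.0

print(inches_to_mm(24))            # -> 609.6 (a US 24-inch tile is not a 600 mm tile)
print(fahrenheit_to_celsius(72.0)) # -> ~22.2 C for a 72 F setpoint
```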
The next installment will be on Communications and Network Design Considerations. You can download a complete PDF of Creating Data Center Strategies with Global Scale by clicking here. This series is brought to you courtesy of Digital Realty.
GoGrid Launches Dynamic Load-Balancing Service Cloud provider GoGrid announced the availability of its new Dynamic Load Balancer, a cloud-based load-balancing solution that provides a cost-effective way to manage high-availability infrastructure. Customers can scale network services dynamically with this new service, deploying load-balancing services in minutes.
“If you’re deploying applications in the cloud, load balancing is an essential requirement,” said Mark Worsey, GoGrid’s CIO and EVP of technology. “During development, our priority was making sure this service is virtualized and programmable like our other compute and storage services. For the past 10 years, GoGrid has pioneered cloud IaaS services that let global businesses dynamically scale their infrastructure on-demand. With our new Dynamic Load Balancer, we bring our cloud computing expertise to this essential network layer.”
Customers can now proactively manage load across multiple servers (with the option of balancing traffic to other premises) and prevent server overload during sudden traffic spikes. Traffic distribution can be based on Weighted Round Robin, Weighted Least Connect, or Source Address Hashing algorithms. The service is fault tolerant out of the box, and users manage all load-balancing functionality programmatically through the service’s API and management console. The load balancer takes advantage of the distributed nature of a cloud environment and doesn’t rely on proprietary hardware. It also integrates easily with other GoGrid services, providing a flexible and robust infrastructure-as-a-service environment for customers.
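For readers unfamiliar with the algorithms named above, here are generic textbook sketches of source address hashing and weighted round robin. These illustrate the concepts only and are not GoGrid’s implementation; the server addresses and weights are hypothetical:

```python
# Generic sketches of two traffic-distribution algorithms: source address
# hashing and weighted round robin. Backend addresses and weights are
# hypothetical placeholders.
import hashlib
from itertools import cycle

SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
WEIGHTS = {"10.0.0.11": 3, "10.0.0.12": 2, "10.0.0.13": 1}

def source_address_hash(client_ip):
    """Hash the client's source address so the same client always
    lands on the same backend (useful for session affinity)."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# Weighted round robin: each server appears in the rotation in
# proportion to its weight, so heavier servers get more requests.
_rotation = cycle([s for s in SERVERS for _ in range(WEIGHTS[s])])

def weighted_round_robin():
    return next(_rotation)

print(source_address_hash("203.0.113.7"))          # always the same server
print([weighted_round_robin() for _ in range(6)])  # 3-2-1 split per cycle
```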
“Prior to using GoGrid’s Dynamic Load Balancer, we either struggled to provide a consistent user experience or had to overprovision load-balancing infrastructure to meet demand,” said Shannon Whitley, founder of Whitley Media. “GoGrid’s Dynamic Load Balancer lets us build and manage a true high-availability footprint in the cloud. We manage large volumes of data and now have transparency and control in every aspect of our application.”
CiRBA Targets ‘Licensing Sprawl’ in Data Centers The rise of the virtual machine has added a layer of complexity to software licensing, a headache that is made worse in a cloudy world where virtualization decouples virtual machines from physical servers. Data center management software has helped data center operators optimize their use of server capacity. One provider believes it can now help customers save money by optimizing their spending on software licenses.
CiRBA, a provider of capacity management software, has added a new software license control system that delivers optimal virtual machine (VM) placements for processor-based licensing models. The idea is that targeting virtual machine sprawl can reduce “licensing sprawl” as well. Like playing Tetris with an environment, CiRBA moves the blocks (VMs) around so that they’re optimally placed to make the best use of server capacity. It now offers similar optimization for software licenses, with an add-on module that targets capacity-based licensing models.
“Licensing optimization is now becoming a capacity management challenge,” said Andrew Hillier, CTO of CiRBA. “By cleverly placing workloads on licensed servers in such a way that the overall footprint is minimized, license costs can be reduced by 40 to 70 percent. It is a showcase example of how the right analytics can save millions of dollars in unnecessary spend.”
Reducing License Purchases and Renewals
Through the Software License Control module, CiRBA optimizes the placement of licensed software on machines, which it says has saved customers an average of 55 percent on data center software licensing costs. The savings come from lower expenditures for renewals, deferral of new software license purchases, and reduced yearly maintenance. Savings can reach into the millions of dollars for expensive operating system, database, and middleware platforms. “Database optimization analysis delivers 10x the savings (compared to OS) on maintenance alone,” said Hillier.
“In the past, licensing has been more of a bean counting exercise,” said Hillier. “The shift to virtual and cloud has led to a much more dynamic picture. Now we can actively manage these environments, minimizing their footprints.”
Through analytics, CiRBA conducts a “defrag” in which, for example, it can consolidate the Windows components onto the minimum safe footprint. “Within constraints, we’ll minimize the footprint,” said Hillier. “We’re not overdriving those hosts. Too many SQL servers and you’ll blow up the IO, so we limit that, as one example.”
Aligning Licenses with Physical Servers
The CiRBA Software License Control module optimizes VM placements in virtual and cloud infrastructure, reducing the number of processors/hosts requiring licenses. It also determines optimized VM placements to both maximize the density of licensed components on physical hosts and isolate these licensed VMs from those not requiring the licenses. It then contains the licensed VMs on the licensed physical servers.
Since virtual environments are dynamic and constantly changing, CiRBA also enables organizations to profile the software licensing, configuration, policy and utilization requirements of new VMs as they enter an environment, routing those VMs to appropriately licensed physical servers and reserving capacity for them through its Bookings Management System.
This is essential when managing dynamic virtual and cloud environments, and also provides visibility into requirements to grow or modify license pools based on upcoming demand. Through this booking and reservation process, CiRBA ensures that density remains optimized by considering both the bookings and organic growth in the environment, and using this to forecast the impact on capacity and licensing.
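To illustrate the underlying idea, here is a minimal first-fit-decreasing bin-packing sketch in the spirit of the Tetris analogy: pack the VMs that require a given license onto as few hosts as possible, so fewer processors need to be licensed. CiRBA’s actual analytics are far richer (policies, I/O constraints, bookings); the VM data here is hypothetical:

```python
# A minimal first-fit-decreasing sketch of license-aware placement: pack
# VMs that need a given license onto as few hosts as possible, so fewer
# servers must carry the license. This is a generic bin-packing heuristic,
# not CiRBA's algorithm, and the VM data is hypothetical.

def place_licensed_vms(vm_demands, host_capacity):
    """Greedy first-fit-decreasing bin packing.

    vm_demands: list of (vm_name, cpu_demand) for VMs needing the license.
    host_capacity: usable CPU capacity per host (the "safe footprint").
    Returns a list of hosts, each a list of VM names; the list length is
    the number of servers that must carry the license.
    """
    hosts = []  # each entry: [remaining_capacity, [vm_names]]
    for name, demand in sorted(vm_demands, key=lambda v: -v[1]):
        for host in hosts:
            if host[0] >= demand:       # first host with room wins
                host[0] -= demand
                host[1].append(name)
                break
        else:                           # no host had room: open a new one
            hosts.append([host_capacity - demand, [name]])
    return [h[1] for h in hosts]

vms = [("db1", 8), ("db2", 6), ("db3", 4), ("db4", 4), ("db5", 2)]
print(place_licensed_vms(vms, host_capacity=12))
# -> [['db1', 'db3'], ['db2', 'db4', 'db5']]: two licensed hosts suffice
```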
CiRBA is a transformation and control system built to optimize virtual and cloud infrastructure, driving up efficiency while driving down costs. It has been known in the market for its migration capabilities, moving machines from point A to point B: physical to virtual, migration to cloud, and data center consolidation. It optimizes density and increases utilization, “kind of like a hotel reservation system for virtual environments,” said Hillier. It’s all policy based.
The service is available on a subscription basis.
Data Center Jobs: 451 Research At the Data Center Jobs Board, we have a new job listing from 451 Research, which is seeking a Senior Analyst: Datacenter Technologies in New York, New York.
The Senior Analyst: Datacenter Technologies should have at least five years of working experience in the area of data centers and/or IT or a closely related field (such as research analyst, a management role in a datacenter-related capacity in industry, or marketing for a datacenter supplier), a proven ability to write and speak in clear English, an understanding of the infrastructure and engineering of datacenters (cooling, power distribution, availability, etc.), some knowledge of IT infrastructure (servers, virtual machines, networks, etc.), and the ability to conceive, plan and write reports for publication and present ideas in presentations and webinars. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
Data Center News: PeakColo, AIS, Phoenix NAP, Verne Global Here are some of this week’s noteworthy links for the data center industry:
PeakColo expands vCloud with NYI. PeakColo announced the expansion of its VMware vCloud-Powered platform into the New York and New Jersey metro areas through its partnership with New York data center services company NYI. This brings the PeakColo cloud footprint to a total of six geographies, including Chicago, Denver, London, New York, New Jersey, and Seattle. “NYI is a long-standing specialty data center provider, with a very high-touch, personalized approach to serving the greater New York/New Jersey metro areas,” states Luke Norris, CEO and Founder of PeakColo. “With the addition of NYI’s two East Coast data centers to PeakColo’s cloud footprint, our cloud architecture is strengthened, ultimately lowering our cloud’s overall latency, and making PeakColo one of the fastest clouds in the marketplace. With this expanded presence, we can further deliver world-class services coast to coast and allow our channel partners to maximize their profitability with full cloud economies of scale.”
DAR.fm signs with AIS. AIS (American Internet Services) announced that DAR.fm, provider of premier web services that allow users to record, pause, and play radio content, has chosen to host its services in the flagship AIS Lightwave facility in San Diego, California. Highly influential in the selection was AIS’s direct peering with several subscriber networks, including AT&T, Comcast, Cox, Sprint, and Verizon, which provides a very short and extremely reliable network path between end users and DAR.fm servers. “Our digital audio recorder service, which works a lot like TiVo for radio, enables users to record radio content, such as popular talk shows, and play them back on demand,” said Michael Robertson, founder and CEO of DAR.fm. “As you can imagine, sound quality and total reliability are ‘must-haves’ for a service like ours – which is why we turned to AIS, where connectivity is king.”
Phoenix NAP selected by CCH SureTax. Phoenix NAP announced that CCH SureTax, a leading transaction tax calculation provider, has selected the data center for its hosting needs. Reasons cited for the selection were the data center’s top-level security standards and the high quality of its products and services. “We are excited to welcome CCH SureTax to Phoenix NAP,” said Ian McClarty, president of Phoenix NAP. “With the large amount of data CCH SureTax supplies, ensuring the company receives reliable service from a dependable data center is extremely vital. We are glad to be able to assist and look forward to establishing a long-lasting relationship.” Phoenix NAP also recently announced that Red Fork Hospitality Solutions, a leading mobile, web-based service that lets restaurants accept mobile orders quickly and easily, has selected the data center for its hosting needs.
Verne Global receives ISO 27001 certification. Iceland data center company Verne Global announced that it has received the International Organization for Standardization (ISO) 27001 certification for information security. The ISO 27001 standard recognizes Verne Global’s concentrated effort to protect the confidentiality, integrity and availability of data, as well as its focus on the security of all vital information assets, all of which are critical for ensuring campus security and continuing customer confidence. “Receiving the ISO 27001 certification provides assurance for clients, employees, partners and investors that the necessary steps are in place to ensure their critical and confidential data is secure and that pertinent laws and regulations are being observed,” said Tate Cantrell, chief technology officer for Verne Global. “The ability to protect data from attack is critical in the data centre industry. Verne continues to drive efficiency and security measures into the business operations while providing customers with a best-of-breed infrastructure.”
Microsoft To Build Two More Data Centers in Virginia  The exterior of the Microsoft data center in Boydton, Virginia. The company said today that it will invest $348 million to build two more facilities at the site. (Photo: Microsoft)
Microsoft Corp. will invest an additional $348 million to expand its modular data center site in southern Virginia, the company said today. The company will build two additional facilities on its data center campus to increase capacity to serve its growing customer base. The expansion boosts Microsoft’s investment in its Virginia data center campus to $997 million.
The expansion is part of an ongoing data center construction program as Microsoft builds future capacity for its battle with Google and other leading players in cloud computing. It has built rapidly at its Virginia facility since it was announced in 2010.
“This expansion will allow us to meet the growing demand from consumers and businesses for our cloud services in the region in an increasingly efficient manner,” said Christian Belady, general manager of Microsoft Data Center Services. “These facilities showcase state-of-the-art designs developed from our latest technology and infrastructure research that continues to minimize water, energy use, and building costs, while increasing computing capacity, software capabilities, and server utilization.”
Focus on Modular Design
Microsoft’s Boydton facility features the use of a container-based design known as an IT-PAC (short for Pre-Assembled Component). The IT-PAC serves as the foundation of a broader shift to a modular, component-based design that offers cost-cutting opportunities at almost every facet of the project. They are designed to operate in all environments, and employ a free cooling approach in which fresh air is drawn into the enclosure through louvers in the side of the container – which effectively functions as a huge air handler with racks of servers inside.
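The control logic behind such free cooling can be sketched in a few lines. The thresholds below are hypothetical placeholders, not Microsoft’s actual setpoints:

```python
# A highly simplified sketch of the free-cooling decision an air handler
# like the IT-PAC embodies: use outside air when it is cool enough,
# otherwise fall back to mechanical cooling or recirculation. Thresholds
# are hypothetical, not Microsoft's actual control logic.

def cooling_mode(outside_temp_c, supply_setpoint_c=24.0, min_temp_c=10.0):
    """Pick an operating mode from the outside-air temperature."""
    if outside_temp_c < min_temp_c:
        return "mix"         # blend outside air with warm exhaust to avoid overcooling
    if outside_temp_c <= supply_setpoint_c:
        return "free"        # louvers open; outside air cools the racks directly
    return "mechanical"      # too warm outside; mechanical/evaporative assist

for t in (-5.0, 18.0, 32.0):
    print(f"{t:5.1f} C -> {cooling_mode(t)}")
```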
Microsoft’s original project in 2010 involved an investment of up to $499 million and 50 new jobs. In 2011 the company invested an additional $150 million to expand the site. The latest expansion project will create 30 new jobs.
“In 2010 we were confident that Microsoft’s plans to establish one of its most advanced data centers in Mecklenburg County would be a transformational project,” said Governor Bob McDonnell. “This second expansion within 16 months of the previous one is a great testament to Microsoft’s success and commitment to Virginia. The company continues to grow its cloud operations, representing a total of nearly one billion dollars in capital investment. The Commonwealth is one of the most active data center markets in the country, and Microsoft’s rapid development helps continue to establish us as an industry leader.”
The Microsoft expansion will be supported by $2.2 million in public funding, including $2 million in funds from the Virginia Tobacco Indemnification and Community Revitalization Commission and $200,000 from the Governor’s Opportunity Fund.
Microsoft’s data centers are a key component in a major business shift at the company, which is expanding beyond its traditional desktop software business to offer cloud computing services, in which Microsoft’s applications will be hosted in its data centers and delivered over the Internet.
Rackspace Accelerates OpenStack Enterprise Push  Rackspace announced OpenStack private cloud capabilities and partnerships with AMD, Brocade, Hortonworks and Arista Networks.
Rackspace Hosting wants to make it easier to deploy and run clouds, and has been partnering with leading hardware and software providers to create three new Private Cloud Open Reference Architectures. Reference architectures and test criteria for OpenStack solutions help to ensure consistent performance, supportability and compatibility.
“The cloud is a paradigm shift that affects IT operations and introduces an entirely new business model; therefore defining Open Reference Architectures is an essential step towards achieving cloud maturity,” wrote Paul Rad, Vice President of Private Cloud at Rackspace, in a blog post. The reference architectures are intended to ease enterprise OpenStack adoption.
Along with the new reference architectures, the company has developed the Rackspace Private Cloud Certification Toolkit, which validates the functionality of an OpenStack private cloud so a cloud operations team can be sure the cloud is operational and has all of the necessary components properly installed and configured. Among the first partners certified are AMD, Brocade, Hortonworks and Arista Networks.
Ensuring compatibility and interoperability means that customers using Rackspace Private Cloud Software with OpenStack can more easily implement a reliable and flexible private cloud solution.
The three reference architectures are:
- Mass-compute with external storage: A scalable-compute cloud architecture where data can be stored on external resilient volumes and exported over iSCSI.
- Mass-compute: A scalable-compute cloud architecture for variable workloads where data resides on the compute nodes directly.
- Distributed Object Storage: An architecture for an object storage cloud to store critical data across multiple zones for resiliency.
Here’s a look at some of the hardware and software providers included in the first round of certifications:
AMD SeaMicro
The AMD SeaMicro SM15000 server is certified for the Rackspace Private Cloud. Product certification for mass compute and object storage ensures that enterprise deployments of Rackspace Private Cloud on AMD’s SeaMicro SM15000 servers are tested and enterprise-ready.
AMD’s SeaMicro SM15000 system is a very high-density, energy-efficient server. In 10 rack units, it links 512 compute cores, 160 gigabits of I/O networking, and up to five petabytes of storage with a 1.28 terabyte-per-second high-performance supercompute fabric, called Freedom Fabric. The SM15000 server eliminates top-of-rack switches, terminal servers, hundreds of cables and thousands of unnecessary components for a more efficient and simpler operational environment.
The AMD SeaMicro SM15000 server has been certified for the following Rackspace Private Cloud reference architectures:
- OpenStack Compute (“Nova in a Box”) scales horizontally and integrates with legacy systems and third-party technologies
- OpenStack Object Store (“Swift in a Rack”) provides a massively scalable, redundant storage system.
“The AMD SeaMicro SM 15000 system offers Rackspace Private Cloud customers unprecedented density, storage capacity and performance, bringing enterprises one step closer to running the cloud in their own data centers,” said Rackspace’s Rad.
Brocade
The Brocade VDX switch with VCS Fabric technology underwent the validation process for compatibility and interoperability and was given the thumbs up.
“Ethernet fabric adoption has now reached critical mass and our enterprise and service provider customers are reaping the benefits of Brocade VCS Fabric technology as part of their cloud-based architectures,” said Jason Nolet, vice president of Data Center Networking, Brocade. “This certification from Rackspace is validation that the Brocade VDX switch family with VCS Fabric technology is perfectly designed to deliver the automation, reliability and agility expected by Rackspace and their customers.”
A member of the OpenStack community since 2011, Brocade has embraced this open source cloud platform as part of its cloud architecture strategy and is optimizing its networking solutions for OpenStack.
Hortonworks Data Platform
Hortonworks, a leading contributor to Apache Hadoop, today announced that the Hortonworks Data Platform (HDP), an enterprise-ready, 100-percent open source platform powered by Apache Hadoop, has achieved certification for Rackspace Private Cloud.
With HDP, data can be processed from applications that are hosted on Rackspace Private Cloud environments, allowing organizations to quickly and easily obtain additional business insights from this information. The provisioning, monitoring and management components of HDP are important enablers for the integration with the Rackspace Private Cloud, providing an easy path for getting data into and out of the cloud. HDP qualifies for the Rackspace Private Cloud Open Reference Architecture “Mass Compute with External Storage”.
“The Hortonworks Data Platform is emerging as the de facto Apache Hadoop distribution for cloud providers, and the certification for Rackspace Private Cloud is another significant step in the enterprise viability of Hadoop,” said Herb Cunitz, president, Hortonworks. “Our commitment to the 100-percent open source model ensures that cloud providers will avoid any vendor lock-in when deploying HDP and Rackspace Private Cloud, and further extends the Apache Hadoop ecosystem to the private cloud, providing another method for exploring and enriching enterprise data with Hadoop.”
Arista Networks
Arista 7050 Series switches have achieved quality assurance and certification for Rackspace Private Cloud. The Arista 7050 Series enables wire-speed 10 GbE and 40 GbE switching, powered by Arista EOS (Extensible Operating System) for software-defined networking applications.
“The combination of Arista 7050 Series switches with Rackspace Private Cloud Software provides enterprise IT professionals with a certified, next generation data center architecture that drives new levels of IT efficiency,” said Ed Chapman, vice president of Business Development, Arista Networks.
The Arista 7050 Series switches provide the low-latency, wire-speed network performance required in an OpenStack-powered cloud, in form factors of up to 64 ports in a 1 RU chassis. In addition, the Arista 7050 Series provides industry-leading power efficiency, with typical power consumption of less than 2 watts per port with twinax copper cables and less than 3 watts per port with SFP/QSFP lasers.