Data Center Knowledge | News and analysis for the data center industry
Friday, October 16th, 2015
12:00p
Here’s IBM’s Data Center Strategy for Bluemix PaaS
The post-Snowden tightening of data sovereignty regulations around the world, capped by this month’s ruling by an EU court invalidating the Safe Harbor rules for transatlantic data transfers, has given cloud services a physical dimension that perhaps wasn’t as pronounced in the past.
Both customers and service providers now have to give a lot more thought to questions like where customer data and applications are stored, where those applications’ users are, and what set of rules governs data transfer and storage in any particular location.
In delivering its fairly new Platform-as-a-Service offering called Bluemix to developers around the world, IBM took an approach that, in a way, preempted the European Court of Justice’s ruling that Safe Harbor violated European citizens’ privacy rights. Users have full control of where their data resides, and they can choose to store it on either side of the Atlantic, Tim Vanderham, VP of cloud platform services at IBM, said.
The company claims Bluemix is the single biggest deployment of the open source Cloud Foundry PaaS, which originally came out of VMware but was eventually spun out into an independently governed open source project. There are three deployment models for Bluemix: public, dedicated, and on-premise. Dedicated Bluemix servers can be in an IBM SoftLayer data center anywhere around the world, and the on-prem version can run in any data center, including customers’ own or third-party colocation facilities. Both options give the user full control over data location and over the configuration of the network links that carry the data to its destinations.
The most sensitive option is the public PaaS, because it is a multi-tenant cloud infrastructure, where users share servers controlled exclusively by IBM. Bluemix public today runs in data centers in Dallas and London. The company announced this week a partnership with Chinese data center provider 21Vianet to operate the service in China and plans to launch a Sydney location in the near future, Vanderham said.
Even with the public cloud option, however, if you choose an instance in London, you can rest assured it will not exchange data with any location you haven’t requested, he explained.
With a single account, a user can define where their application is deployed and, importantly, where services used by that application will be delivered from. While part of the PaaS’s value lies in its development environment and the automated infrastructure it runs on, the rest comes from the 100-plus application services available to developers, including, among other things, IBM’s Big Data analytics and cognitive computing services called Watson.
Your application may be hosted in London, but you may choose the services to be delivered from an IBM data center in Germany, for example. If you need to store your own data accessed by your application that runs on Bluemix in a specific location, you can use either a dedicated instance in a SoftLayer data center, or, if there isn’t a SoftLayer data center in that location, you can deploy a local instance in a data center of your choice.
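The deployment-model choice the article walks through can be expressed as a simple decision function. This is an illustrative sketch, not IBM code: the region names and the SoftLayer region set below are hypothetical examples standing in for IBM’s actual footprint.

```python
# Hypothetical SoftLayer regions for illustration only.
SOFTLAYER_REGIONS = {"dallas", "london", "frankfurt", "sydney"}

# Public Bluemix ran in Dallas and London at the time of writing.
PUBLIC_REGIONS = {"dallas", "london"}

def bluemix_deployment_model(data_region: str, needs_isolation: bool) -> str:
    """Pick a Bluemix deployment model based on where data must live.

    Mirrors the article's logic: public multi-tenant cloud where
    available, a dedicated instance in a SoftLayer data center for
    isolated workloads, and a local (on-prem) instance anywhere else.
    """
    if data_region in PUBLIC_REGIONS and not needs_isolation:
        return "public"      # multi-tenant, servers controlled by IBM
    if data_region in SOFTLAYER_REGIONS:
        return "dedicated"   # single-tenant in a SoftLayer facility
    return "local"           # customer's own or colo data center
```

Under these assumptions, a non-isolated London workload lands on public Bluemix, a German data-residency requirement maps to a dedicated instance, and anywhere without a SoftLayer presence falls through to a local deployment.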
IBM’s response to the Safe Harbor ruling so far has been similar to the response by other major cloud providers. It posted a notice on its website telling customers they can rely on the alternative set of data transfer rules for EU members – the so-called Model Clauses – to continue to operate legally if they move data between the US and Europe. This applies to companies that run their own services on top of IBM’s cloud infrastructure and need to move data across the Atlantic.
More than 20 IBM services covered by Model Clauses are on IBM’s list, and “clearly, Bluemix is going to go in that direction,” Vanderham said. All the application services available through Bluemix will over time be covered by Model Clauses as well, he added.
Overall, IBM prefers to deploy Bluemix and other cloud services in its SoftLayer data centers. That’s not possible everywhere, however. In China, for instance, a foreign company must partner with a Chinese company to set up shop there.
“Laws and rules set forth by the Chinese government require that you … have a cloud service delivery certification, and that has to be done by a Chinese national company,” Vanderham said. “While we continue to investigate the options to bring SoftLayer to China, we chose to partner with 21Vianet because of our previous relationship with them, which we had since 2013 around cloud managed services.”
The Chinese data center company provides IBM’s managed services in China. It has a similar relationship with Microsoft, delivering Microsoft’s Azure and Office 365 cloud services out of its data centers in the country.
3:00p
GE Rethinks Data Center Rack Power Distribution
General Electric’s data center infrastructure business unit GE Critical Power has designed a power distribution unit that utilizes unused space in traditional data center IT racks.
Enterprise data centers short on rack space may benefit from the solution, but its primary target customers appear to be colocation or hosting providers, for whom an empty rack unit is an additional revenue opportunity. That extra space – up to 10 percent, according to the vendor – can really add up at scale.
“It’s like getting an 11th floor added to a 10-story building for free,” Jim Montgomery, senior product manager at GE Critical Power, said in a statement.
The PDU, installed vertically along one side of a rack, utilizes the five inches of space left when a traditional 24-inch-wide cabinet is filled with 19-inch-wide servers. It frees up horizontal rack space that’s usually occupied by DC power rectifiers by integrating GE’s own compact rectifiers into the vertical PDU itself.
GE is addressing an issue Facebook pointed out about three years ago, when it started prototyping a new design that left rack width at 24 inches but increased server width from 19 inches to 21 inches. The main issue is that the standard dimensions used in data center rack design weren’t created for modern IT equipment. Rather, the design is based on requirements of railroad signal relays from the 1950s, according to Facebook.
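The arithmetic behind GE’s pitch is easy to check. This back-of-the-envelope sketch uses the figures from the article; the 42U rack height is an assumed example, since the article only cites “up to 10 percent” extra space.

```python
# Numbers from the article: a traditional 24-inch-wide cabinet holding
# standard 19-inch-wide equipment leaves a vertical strip of spare width
# that the Edge Cabinet PDU occupies.
CABINET_WIDTH_IN = 24    # traditional cabinet width
EQUIPMENT_WIDTH_IN = 19  # standard rack-mount equipment width

spare_width_in = CABINET_WIDTH_IN - EQUIPMENT_WIDTH_IN  # strip used by the PDU

# "Up to 10 percent" more usable rack space, per GE -- on an assumed
# 42U rack that works out to roughly four rack units freed up.
RACK_UNITS = 42  # assumed rack height for illustration
freed_units = round(RACK_UNITS * 0.10)
```

On those assumptions, the PDU rides in a 5-inch strip and frees roughly 4U per 42U rack, which is the “11th floor on a 10-story building” in Montgomery’s analogy.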
GE has a reference rack design that can accommodate its Edge Cabinet PDU.
Each PDU can accommodate five of the company’s GP100 rectifiers with either 12-volt DC or 48-volt DC output. An additional benefit of the design is less electrical cabling between the PDU and the server power supplies.
3:30p
Friday Funny: Pumpkin Cage
In the spirit of Halloween!
Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, draws a cartoon, and we challenge our readers to submit the funniest, cleverest caption they can think of. Then we ask our readers to vote for the best submission, and the winner receives a signed print of the cartoon.
Congratulations to Stanley, whose caption won the “Server Pileup” edition of the contest. His caption was: “The rack team is at a conference but the stack team is still working.”
Some good submissions came in for the “Hole in the Wall” edition – now all we need is a winner. Help us out by submitting your vote below!
Take Our Poll
4:25p
Weekly DCIM Software News Update: October 16
This week in DCIM software: a new software release, a partnership around predictive modeling, and an acquisition.
Sunbird launches new release of DCIM software. Sunbird Software, the DCIM software company Raritan spun off before it was acquired by Legrand, has launched the latest release of its data center management software suite, adding an easy-to-understand dashboard for C-level execs and a host of new and enhanced monitoring, auditing, and analytics capabilities.
DCIM Solutions partners with Future Facilities for predictive modeling. DCIM Solutions announced a strategic partnership with Future Facilities to adopt ACE predictive modeling as part of its Data Center Assessment Services. The Future Facilities ACE (Availability, Capacity, and Efficiency) assessment scores a data center on how compromised its availability, physical capacity, and cooling efficiency have become by analyzing and mapping the interrelationships among the three variables.
TSL Products acquires AdInfa for DCIM software solution. TSL Products, a UK-based hardware and software provider, announced it has acquired AdInfa, developer of the DCIM software solution InSite. The move allows TSL to add InSite to its growing portfolio of monitoring solutions for the broadcast sector, while also giving the company a strong entrance into the data center market.
5:12p
Red Hat Buys Open Source Data Center Software Firm Ansible
Red Hat, which has made a name for itself selling and supporting enterprise-hardened versions of open source data center software, has acquired Ansible, a startup with a similar business model founded by former Red Hatters.
Ansible deals in IT automation for DevOps, a style of IT infrastructure management aimed at enabling developers to deploy new applications and change existing ones quickly and frequently. Pioneered by web giants like Google, Twitter, and Facebook, the model is gaining a growing foothold in the more traditional enterprise space, where companies like banks and manufacturers are now looking at software development as a way to differentiate and add new sources of revenue.
Red Hat, most famous for its enterprise distribution of Linux, has a number of data center software products with similar aims and says Ansible’s portfolio will be complementary to its own.
Red Hat did not disclose terms of the transaction, and other tech news publications reported conflicting figures, citing anonymous sources. TechCrunch said Ansible was acquired for $150 million, while VentureBeat, which broke the news Thursday, said the price tag was “more than $100 million.”
Ansible was attractive to Red Hat because its platform, called Tower, is simple to use, modular, and popular as an open source project, Alessandro Perilli, general manager of Red Hat’s cloud management strategy, wrote in a blog post.
Ansible makes application deployment easier by automating infrastructure provisioning, orchestration, and management across hybrid clouds. It supports both Amazon Web Services and OpenStack, the open source cloud infrastructure software. It also makes use of Docker application containers.
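Ansible expresses that automation as declarative YAML playbooks pushed over SSH. The following is a hypothetical minimal playbook for illustration, not code from Ansible or Red Hat; the host group and package names are made-up examples.

```yaml
# Illustrative playbook: install and start a web server on a group of
# hosts. "webservers" and "nginx" are example names, not a real config.
- hosts: webservers
  become: yes
  tasks:
    - name: Install the web server package
      yum:
        name: nginx
        state: present
    - name: Ensure the web server is running
      service:
        name: nginx
        state: started
```

Because the playbook describes desired state rather than a script of steps, re-running it against hosts that are already configured changes nothing, which is what makes this style attractive for the frequent, repeatable deployments DevOps teams want.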
Here’s what the integration between Red Hat and Ansible will look like from the product perspective:
- Red Hat CloudForms will continue to offer overall orchestration and policy enforcement across all architectural tiers we support, within the corporate boundaries and on public clouds.
- Ansible will automate the provisioning and configuration of infrastructure resources and applications within each architectural tier, as requested through the CloudForms self-service provisioning portal. This will include deploying Red Hat Satellite agents on bare metal machines when the use case requires it.
- Red Hat Satellite will continue to enable the provisioning and configuration of Red Hat systems (and security patches and software updates) within each architectural tier, as defined by the Ansible automation workflows.
5:58p
OpenStack Liberty Enhances Open Source Cloud Networking, Containers 
This post originally appeared at The Var Guy
Liberty, the newest release of the OpenStack open source cloud operating system, is out this week. It brings a host of new features, as well as a revamp of OpenStack’s governance model.
The full list of new features in OpenStack Liberty is extensive, comprising 17 individual sections, each filled with specific information about driver updates, API changes, and so on.
Overall, however, the most significant new features in OpenStack Liberty include:
- Cells, a feature created by Rackspace that lets OpenStack users manage multiple OpenStack clouds as if they were a single cloud. That simplifies maintenance and centralizes administration tasks.
- Magnum, a container orchestration engine. Magnum simplifies the integration of containerized apps into an OpenStack cloud.
- Kuryr, a new component in OpenStack’s Neutron networking infrastructure that facilitates networking for containers.
- A role-based access control system for cloud networking, which creates granular access control for managing OpenStack networking.
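The networking RBAC feature in the last bullet can be pictured with a short sketch. This is not code from the release; it builds a request body in the shape Neutron’s rbac-policy API uses to grant one specific tenant shared access to a network, instead of marking the network globally shared. The network and tenant IDs are made up.

```python
# Sketch of Liberty's granular network access control: a policy that
# shares one network with one tenant only. IDs are hypothetical.
def network_share_policy(network_id: str, target_tenant: str) -> dict:
    """Build a Neutron RBAC policy body granting a single tenant
    shared access to the given network."""
    return {
        "rbac_policy": {
            "object_type": "network",
            "object_id": network_id,
            "target_tenant": target_tenant,
            "action": "access_as_shared",
        }
    }

policy = network_share_policy("net-1234", "tenant-abcd")
```

The point of the feature is the `target_tenant` field: before Liberty, sharing a network was all-or-nothing across every tenant in the cloud.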
New features aren’t the only big change in OpenStack Liberty. The latest version of the open source platform also introduces what OpenStack developers are calling a “Big Tent” governance model.
OpenStack was previously distributed as an “integrated release,” which meant all of its components were distributed by the OpenStack project itself as a single package. The Big Tent model makes it easier for users to grab only the parts of the cloud operating system that they want.
At the same time, Big Tent distribution helps to decentralize the open source, community-based development of the platform. Developers can now contribute to OpenStack components without having to secure the official approval of the project. As long as they follow OpenStack documentation and license their work properly, their code will be part of OpenStack.
Liberty, which is generally available as of Oct. 15, is the 12th release of OpenStack in the project’s history and the second one this year.
This first ran at http://thevarguy.com/open-source-application-software-companies/101615/openstack-liberty-enhances-open-source-cloud-networking-c