Data Center Knowledge | News and analysis for the data center industry
Monday, November 16th, 2015
1:00p
Facebook’s DCIM Vendor CA Quits DCIM Software Market

CA Technologies has decided to get out of the data center infrastructure management market, where it was considered one of the leaders.
The New York-based IT infrastructure management software giant will no longer sell its stand-alone DCIM software solution, called CA DCIM, which has been deployed in data centers operated by Facebook and NTT-owned RagingWire, among others.
A relatively new category of products, DCIM software solutions have been met with a lot of skepticism in the industry. Although vendors and analysts say data center professionals generally understand DCIM better today than they did even one year ago, that skepticism, together with the complexity and, in most cases, high cost of deploying DCIM, still prevents many companies from adopting the tools.
According to a forecast by Jennifer Koppy, research director at IDC who tracks the DCIM market, the market will reach $576 million in revenue this year – up from $475 million in 2014. It will get close to $1 billion in 2019, growing at a compound annual rate of 16 percent.
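As a quick sanity check on those figures, a back-of-the-envelope sketch (assuming the 16 percent compound rate is applied to the 2015 forecast for four years) shows how the roughly $1 billion 2019 number falls out:

```python
# Back-of-the-envelope check of the IDC figures cited above.
# Assumption: the 16 percent CAGR compounds from the 2015 forecast for four years.
revenue_2015 = 576_000_000   # forecast DCIM revenue for 2015, in dollars
cagr = 0.16

revenue_2019 = revenue_2015 * (1 + cagr) ** 4
print(f"Projected 2019 DCIM revenue: ${revenue_2019 / 1e9:.2f}B")  # ~$1.04B, i.e. "close to $1 billion"
```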
CA DCIM, which was built around the company’s previously existing energy monitoring product called ecoMeter, was one of four leading products in market research firm Gartner’s Magic Quadrant for DCIM last year, along with DCIM software by Schneider Electric, Emerson Network Power, and Nlyte Software. Gartner dropped CA from this year’s quadrant, published in October, leaving Schneider, Emerson, and Nlyte as the three remaining leaders.
Emerson, according to IDC, has the largest DCIM market share, followed by Schneider.
Instead of selling a stand-alone DCIM product, CA will focus on end-to-end IT infrastructure monitoring, including things like network, middleware, databases, and even applications, Fred Weiller, senior director of product marketing for CA’s Infrastructure Management Portfolio, said in an interview.
This will include data center power and cooling monitoring capabilities, the capabilities of ecoMeter, the genesis of CA DCIM. But it will not include other DCIM software components, such as visualization of the physical data center layout and asset management, he said.
The product did not end up on the chopping block because there was low demand, Weiller explained. It was more about aligning the company’s products with its strategic direction, which is about expanding reach across enterprise infrastructure horizontally.
“DCIM is what I would call a vertically integrated solution that has the asset component, the workflow component, and then the monitoring component,” Weiller said. “We decided that continuing to develop a more vertically integrated DCIM solution was not the best use of our resources in terms of providing new capabilities to our customers.”
CA will continue support for existing CA DCIM customers, including services, maintenance releases, and capacity expansion. “We’re going to take great care of customers that have deployed that solution,” he said.

4:00p
Amazon Plans Virginia Data Center to Serve Federal Clients

Amazon subsidiary Vadata has successfully negotiated a $2.7 million tax-break deal with local officials for a data center the company plans to build at the Warrenton Training Center in Virginia, a classified US federal government communications complex that serves the likes of the CIA, NSA, and the Department of Defense. Local news outlet Fauquier Now reported on the tax deal.
Vadata does data center projects on the online retail and cloud infrastructure services giant’s behalf. It’s common for web-scale data center operators like Amazon to use subsidiaries to build data centers for them in attempts to obscure their connection to the projects, which sometimes represent hundreds of millions of dollars in investment.
An Amazon spokesperson confirmed to us earlier that Vadata was a wholly owned Amazon subsidiary.
The US federal government is a major customer of cloud services, and its use of these services will only grow as agencies continue to outsource more and more of their IT infrastructure needs in efforts to cut cost.
Security requirements for many government agencies, however, dictate that they do not host data and applications in data centers that also host private-sector customers. In response to these requirements, cloud service providers have built data centers just for government customers.
Amazon Web Services has a data center on the West Coast of the US that hosts its GovCloud availability region. The facility is designed specifically for government customers, meeting various regulatory and compliance requirements.
Microsoft launched the government version of its Azure cloud services last year. Azure Government is physically and virtually separated from infrastructure serving non-government customers, and its infrastructure across multiple data centers is operated by personnel who have gone through specialized screening.
IBM serves government cloud customers out of data centers in Dallas and Ashburn, Virginia.
Vadata, which has operated at the Warrenton Training Center in Virginia for two years, has already invested $26.4 million in the site, Fauquier Now reported, citing the company’s tax application. The upcoming project will be much bigger, with the company expecting to spend up to $200 million on the data center over time.
A local economic development official told Fauquier Now that as far as he knew, the federal government would be the data center’s only customer. Fauquier County expects the project to generate about $1.5 million in tax revenue annually.

4:30p
Guaranteed Storage Performance Requires a New Approach to QoS

Derek Leslie is Senior Product Manager at SolidFire.
Cloud providers and private cloud operators must be able to guarantee storage performance under the most varied conditions, including failure scenarios, peak loads, variable workloads, and growing capacity requirements. Doing so, however, requires a new approach to storage quality of service (QoS).
Whether in public or private clouds, companies require constant performance and system availability, even in the face of increasing amounts of data and increasingly complex workloads. This is only possible with a storage QoS that is an integral component of the system design.
Storage infrastructure that is ready for the future should include at least the following four main components:
- Continuous SSD architecture
- Genuine scale-out architecture
- RAID-less data protection
- Balanced load distribution
Continuous SSD Architecture Ensures Consistent Latency
The basic requirement for successful QoS implementation is to replace hard-drive-based storage systems with all-SSD or all-flash architectures. This is the only way to ensure consistent latency under any I/O load. A spinning drive can perform only one I/O operation at a time, and every seek adds latency. In cloud environments where several applications or virtual machines share a drive, this can lead to latency variances from 5 to over 50 milliseconds. In an all-SSD architecture, by contrast, the lack of moving components results in consistent latency, regardless of how many applications issue I/Os and whether those I/Os are sequential or random. Unlike a hard drive, which funnels everything through a single mechanical bottleneck, an SSD can perform I/O operations in parallel, for example across eight or 16 channels, which keeps latency low under any I/O load.
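The latency gap is easiest to see with a toy queueing model. The sketch below uses made-up service times (an assumption for illustration, not a benchmark) to contrast a single spinning drive, which serves one request at a time and pays a seek penalty on each, with an SSD that spreads requests over parallel channels:

```python
import random

def hdd_latencies(requests, seek_ms=(5, 15)):
    """One actuator: each request queues behind every earlier one and pays a seek."""
    latencies, backlog = [], 0.0
    for _ in range(requests):
        backlog += random.uniform(*seek_ms)   # seek plus rotational delay (illustrative)
        latencies.append(backlog)             # completion time of this request
    return latencies

def ssd_latencies(requests, channels=8, access_ms=0.1):
    """Requests fan out round-robin over parallel channels; no mechanical seek."""
    per_channel = [0.0] * channels
    latencies = []
    for i in range(requests):
        per_channel[i % channels] += access_ms
        latencies.append(per_channel[i % channels])
    return latencies

for name, lat in (("HDD", hdd_latencies(32)), ("SSD", ssd_latencies(32))):
    print(f"{name}: worst-case latency ≈ {max(lat):.2f} ms")
```

Even this crude model shows why latency on a shared spindle grows with queue depth while flash stays flat, sequential or random alike.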
Scale-out Architectures Ensure Linear, Predictable Performance as the System Scales
Traditional storage architecture follows the scale-up model, where one or two controllers are linked with a set of drive units. Capacity is increased by adding drives. One problem with this is that controller resources can only be upgraded by switching to a “larger” controller; once the largest available controller is in use, the only way to grow further is to purchase an additional storage system. This inevitably leads to higher costs and more administrative work. A genuine scale-out architecture, by contrast, couples controller resources with storage capacity: every time capacity is increased and more applications are added, performance increases as well. That performance is available for any volume in the system, not only for new data, which is essential both for capacity planning and for the consistency of the storage system’s performance.
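A rough numerical sketch (with assumed per-node and per-drive figures, not vendor specifications) illustrates the difference: the scale-up system flattens out once its controller pair saturates, while the scale-out system gains controller resources with every node it adds.

```python
def scale_up_iops(drives, iops_per_drive=5_000, controller_limit=200_000):
    # Performance tracks drive count only until the fixed controller pair saturates.
    return min(drives * iops_per_drive, controller_limit)

def scale_out_iops(nodes, iops_per_node=50_000):
    # Every added node contributes both capacity and controller resources.
    return nodes * iops_per_node

for n in (4, 8, 16, 32):
    print(f"{n:>2} units: scale-up {scale_up_iops(n):>9,} IOPS | scale-out {scale_out_iops(n):>10,} IOPS")
```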
RAID-less Data Protection Offers Guaranteed Performance in Error Situations
With regard to QoS, a RAID approach seriously degrades performance when a drive fails, often by more than 50 percent, because a failure means a two- to five-fold increase in I/O load for the remaining drives in the set. A better approach is RAID-less data protection based on a replication algorithm, in which redundant copies of each drive’s data are distributed evenly across all remaining drives in the cluster, not confined to a specific RAID set. The result of this data distribution is that when a failure occurs, the I/O load of the failed drive is taken over by all of the other storage media in the system, so the load on each individual drive increases only slightly.
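A worked example (with illustrative drive counts, not SolidFire’s actual layout) shows why spreading the redundant copies across the whole cluster keeps the per-drive hit small when a drive dies:

```python
# Illustrative failure math: how much extra I/O each surviving drive absorbs
# when one drive fails, under the two protection schemes described above.
cluster_drives = 100   # assumed cluster size
raid_set_size = 6      # assumed drives per RAID set

raid_extra = 1 / (raid_set_size - 1)         # failed drive's load lands on its RAID set peers
replicated_extra = 1 / (cluster_drives - 1)  # replicas are spread over every remaining drive

print(f"RAID set:    ~{raid_extra:.0%} extra load per surviving set member")
print(f"Replication: ~{replicated_extra:.1%} extra load per remaining cluster drive")
```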
Balanced Load Distribution Eliminates Peak Loads that Cause Unpredictable I/O Latency
In traditional storage architectures, data is stored in a storage pool within a RAID set, usually on the same drives. If new drives are added, they are generally used for new data rather than for load rebalancing. The result: static data placement creates unequal load distribution between storage pools, RAID sets, and individual drives. In this scenario, manual action by the storage administrator, often via Excel spreadsheets, is the only way to establish an efficient and balanced distribution of I/O load and capacity. Other approaches, however, automatically distribute data across all drives in the cluster. When new drives are added, the data in the system is automatically redistributed across the cluster, regardless of whether it is old or new. Balanced data distribution is therefore possible without any manual intervention, and any additional workload the system takes on is spread evenly as well. This automatic distribution is the only way to avoid the peak loads that can impair performance.
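The sketch below is a deliberately simplified stand-in for such automatic rebalancing (not any vendor’s actual algorithm): when a drive joins, blocks move from the fullest drives onto the newcomer until every drive carries roughly the same share, old data and new alike.

```python
def rebalance(drive_blocks, new_drive):
    """Even out block counts after a new, empty drive joins the cluster."""
    drive_blocks[new_drive] = []
    target = sum(len(blocks) for blocks in drive_blocks.values()) // len(drive_blocks)
    for drive, blocks in drive_blocks.items():
        while drive != new_drive and len(blocks) > target and len(drive_blocks[new_drive]) < target:
            drive_blocks[new_drive].append(blocks.pop())   # migrate one block to the new drive

cluster = {"d1": list(range(0, 90)), "d2": list(range(90, 180))}
rebalance(cluster, "d3")
print({drive: len(blocks) for drive, blocks in cluster.items()})   # roughly 60 blocks each
```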
Only an end-to-end QoS approach can ensure predictable storage performance. A modern storage architecture with integrated QoS not only overcomes performance problems comprehensively, but also offers faster provisioning and simpler management.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:58p
Wind Power Deals to Bring Equinix to 100% Renewable in N. America

Equinix has signed two wind power purchase agreements which, together with an earlier solar deal in California, will bring enough renewable energy to the grid to offset the energy consumption of its entire North American data center portfolio, the company announced Monday.
The Redwood City, California-based company, the world’s largest data center provider whose facilities house some of the internet’s most important interconnection hubs, agreed to buy energy generated by utility-scale wind farms in Oklahoma and Texas. The agreements cover 225 MW of generation capacity total.
In September, Equinix announced a 105 MW power purchase agreement with SunEdison that will provide enough solar energy to offset the power consumption of its California data centers.
These deals will ensure all energy Equinix data centers consume in North America is offset with wind and solar by the end of 2016, which is when the company expects the three plants to come online. It committed to securing renewables for 100 percent of its global data center operations earlier this year.
While it has become common for big web-scale data center operators – companies like Google, Facebook, and Microsoft – to strike power purchase agreements with renewable energy developers to offset the energy consumption of their massive data centers, few of Equinix’s peers, businesses that provide data center services commercially, have made efforts to address their carbon footprint. The frequently cited reasons have been lack of customer interest, higher rates on renewables, and general difficulty of procuring enough capacity to offset a typical colocation data center’s enormous load.
There are now signs, however, that at least customer interest in having their colocation footprint powered by renewable energy is growing.
Equinix’s is the largest renewable energy commitment to date by a colocation provider, signaling that this is going to become more and more important to compete in the market. Another example is Equinix’s rival and landlord Digital Realty Trust, which since last year has been offering customers premium-free renewable energy anywhere in the world for one year.
Amsterdam-based Interxion, which competes head-to-head with Equinix in Europe, has a goal of 100 percent renewable energy for all of its data centers in the region.
One of Equinix’s new power purchase agreements is with an affiliate of NextEra Energy Resources, which is building the Rush Springs Renewable Generation Facility in Grady and Stephens counties, Oklahoma. The Equinix deal will ensure energy generated by 125 MW of the plant’s capacity will be pumped into the Southwest Power Pool regional electricity grid.
The Wake Wind Energy Facility is being built in Floyd and Crosby counties in Texas by Invenergy. Equinix has agreed to buy energy from 100 MW of Wake Wind’s capacity. The facility will plug into the Electric Reliability Council of Texas regional electricity grid.

6:38p
Hewlett Packard Enterprise Releases Docker Solutions Portfolio 
This article originally appeared at The WHIR
Hewlett Packard Enterprise introduced a new portfolio of solutions built for Docker at DockerCon Europe on Monday. The solutions span cloud, software, storage, and other services.
Part of the portfolio is HPE Helion Development Platform 2.0 with support for Docker, which allows users to deploy microservices and includes Helion Code Engine, a continuous integration and continuous deployment service for automating code workflows.
“Containers are changing the way applications are developed and managed, bridging the gap between IT and developers and helping organizations accelerate their pace of innovation,” said Martin Fink, EVP and CTO, HPE. “Hewlett Packard Enterprise is embracing and extending Docker capabilities by providing a comprehensive set of enterprise class tools and services to help customers develop, test and run Docker environments at enterprise scale.”
HPE told TechCrunch that as enterprises look to hybrid environments, Docker containers become more relevant. The company is planning to formalize its partnership with Docker.
Developers and IT operations can test and monitor Dockerized applications with HPE StormRunner and HPE AppPulse for Docker, and manage and monitor a complete Docker Swarm cluster with HPE Sitescope. HPE Sitescope uses a Docker Swarm address to automatically build a cluster map and monitor five layers of the cluster. The latest update of Docker includes production-ready Docker Swarm.
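HPE doesn’t detail Sitescope’s integration here, but as a rough illustration of the kind of cluster map a Swarm manager endpoint can yield, the sketch below uses the community Docker SDK for Python (an assumption on our part, not HPE tooling) to enumerate a cluster’s nodes; the manager address is hypothetical.

```python
import docker

# Connect to a Swarm manager (hypothetical address) and list the cluster's nodes.
client = docker.DockerClient(base_url="tcp://swarm-manager.example.com:2375")

for node in client.nodes.list():
    spec, desc, status = node.attrs["Spec"], node.attrs["Description"], node.attrs["Status"]
    print(f"{desc['Hostname']:<20} role={spec['Role']:<8} state={status['State']}")
```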
According to the company, HPE Codar for Docker enables continuous deployment of hybrid workloads with the click of a button. The Docker Machine plugin for HPE Composable Infrastructure automates the deployment of Docker container hosts from HPE OneView, which allows IT and DevOps to provision bare metal infrastructure for Docker environments within an organization’s data center.
“Developers today require a new model to build, ship and run distributed applications that existing infrastructures were not designed for,” said Nick Stinemates, VP, business development & technical alliances, Docker. “Docker provides application portability, unifying application deployment to any infrastructure, whether on-premises or in the cloud. Docker and Hewlett Packard Enterprise together will drive the next generation of enterprise applications that define truly agile businesses.”
Other solutions in the new portfolio include persistent storage for Docker containerized apps, enterprise-grade container support, and a Docker reference architecture which provides best practices on how to deploy Docker on Converged Architecture 700 with Helion CloudSystem 9.0.
This first ran at http://www.thewhir.com/web-hosting-news/hewlett-packard-enterprise-releases-docker-solutions-portfolio

6:49p
The Evolution of DCIM: Gartner’s Latest Magic Quadrant for DCIM Tools

It’s important to understand that cloud computing isn’t going anywhere. In fact, the proliferation of cloud computing and various cloud services is only continuing to grow. Recently, Gartner estimated that global spending on IaaS will reach almost US$16.5 billion in 2015, an increase of 32.8 percent from 2014, with a compound annual growth rate (CAGR) from 2014 to 2019 forecast at 29.1 percent. There is a very real digital shift happening for organizations and users utilizing cloud services.
The digitization of the modern business has created a new type of reliance on the modern data center and the cloud. However, it’s important to understand that the cloud isn’t just one platform. Rather, it’s an integrated system of various hardware, software and logical links working together to bring data to the end-user. So here’s the big question – how do you manage it all? How do you use tools to help you facilitate continuous re-optimization of data center power, cooling and physical space? Most of all – which solutions can also be powerful enablers of CAPEX and OPEX savings?
When it comes to data centers and the cloud, you can’t efficiently manage what you can’t see. In this latest Gartner Magic Quadrant report, we focus on the DCIM tools that are leading the market and on how DCIM has evolved.
One of those leaders, Nlyte, has distinguished itself in the market with a product suite that’s available both as on-premises software and as a SaaS offering. Furthermore, Nlyte has focused on integrating its DCIM platform with ITSM systems, allowing for an end-to-end management interface from infrastructure to business process.
View this report here and see how DCIM helps 1) integrate IT and facilities management of a data center, 2) enable greater energy efficiency, and 3) enhance resource and asset management by showing how the resources and assets are interrelated. Plus, you’ll learn why there is more investment in DCIM today and how current leaders are helping revolutionize data center visibility with business processes.