Data Center Knowledge | News and analysis for the data center industry
Tuesday, September 1st, 2015
12:04a
Lithium-ion Battery Prices Expected to Plunge 60 Percent by 2020

We know that technology behemoths like Microsoft and Facebook have successfully implemented lithium-ion battery systems in their data centers, but for small and mid-sized facilities, such systems are often cost prohibitive.
That may change drastically over the next five years, according to a recent study by Australian consultancy AECOM. Although the report indicated that all battery technologies are likely to drop in price over that period, Li-ion types may experience the largest dip of all – 60 percent – by 2020.
In other words, lithium-ion batteries could drop from $550 per kilowatt-hour (kWh) in 2014 to $200 per kWh by 2020. That could make more than a few data center managers consider switching from traditional lead-acid batteries to lithium-ion units, which offer higher energy density, minimal maintenance, and longer life.
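For a rough sense of the arithmetic, a flat 60 percent decline from the 2014 baseline lands close to the figure the study projects. The back-of-the-envelope Python below is our own illustration, not AECOM's model:

```python
# Back-of-the-envelope check of the projection cited above. The 2014 baseline and the
# 60 percent decline come from the article; applying the decline as a single multiplier
# is our simplification, not AECOM's methodology.

price_2014_per_kwh = 550          # USD per kWh, 2014
projected_decline = 0.60          # expected drop by 2020

price_2020_per_kwh = price_2014_per_kwh * (1 - projected_decline)
print(f"Projected 2020 price: ${price_2020_per_kwh:.0f}/kWh")   # ~$220/kWh, near the ~$200 cited
```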
In a data center environment, these benefits translate into savings in space, weight, and replacement costs that contribute directly to the bottom line. Lithium-ion batteries make it possible to deliver 6kW of power to a rack in a 2U package weighing less than 100 lbs. A higher cycle life also means a lithium-ion UPS can last up to seven years without service.
Microsoft says its lithium-ion battery system is five times cheaper than traditional UPSes. The batteries also take up 25 percent less floor space because they are installed directly within the server racks, PC World reported.

12:16a
VCE Unveils Hyperconverged Rack Configured as VMware SDDC

Moving to bring the concept of hyperconverged infrastructure to IT environments running VMware on rack systems, VCE unveiled an implementation of the VCE VxRack System that comes preconfigured with VMware EVO SDDC software at the VMworld 2015 conference today.
Based on the same software-defined data center (SDDC) technology that VMware created for blade servers, VMware EVO SDDC is an instance of an SDDC that has been specifically designed to run on rack servers. As a sister company of EMC, VCE is making EVO SDDC available on rack servers alongside the blade servers that VCE pre-configures with EVO:RAIL software.
Nina Margus, chief marketing officer for VCE, said that IT organizations in general are realizing they spend a lot of time on lower-level hardware and software integration tasks that add little real value to the business.
“We let customers spend less time on non-differentiated tasks,” said Margus. “Our goal is to simplify infrastructure all the way down to the hypervisor.”
That issue is particularly vexing, noted Margus, at a time when IT organizations are under more pressure than ever to build private clouds. The VCE systems not only simplify the building and deployment of those clouds using VMware software, Margus said the VCE systems also come with lifecycle management tools that make it easier to manage those clouds over the long term.
Scheduled to be available to order in the fourth quarter, the VxRack System based on VMware EVO SDDC is engineered out of the factory in a way intended to let IT organizations be more agile in how they provide IT services, Margus noted. In contrast, most existing rack systems require a lot of additional systems integration work. Traditionally, however, IT organizations that favor rack systems have used racks as a way to scale compute, storage, and networking up and out independently of one another.
The degree to which IT organizations may be willing to shift to a more prescribed approach to rack systems remains to be seen. However, Margus said the VxRack System based on VMware EVO SDDC is also at the heart of the Federation Enterprise Hybrid Cloud platform rolled out today under the auspices of The EMC Federation, which consists of products and services spanning most of the business units that make up EMC. As such, it is likely that in the months and years ahead many more IT organizations will be exposed to hyperconverged rack systems.

12:00p
Latency, Bandwidth, Disaster Recovery: Selecting the Right Data Center

In selecting the right type of data center colocation, administrators must thoroughly plan out their deployment and strategies. This means involving more than just facilities teams in the planning stages. Selecting a good data center has to account not only for the physical elements of the facility but also for the workload to be delivered.
Are you working with web applications? Are you delivering virtual desktops to users across the nation? There are several key considerations around the type of data or applications an organization is trying to deliver via the data center.
Network Bandwidth and Latency: With the increase in traffic moving across the internet, there is greater demand for more bandwidth and lower latency. As discussed earlier, it's important to have your data reside close to your users as well as to the applications or workloads being accessed. Where data demands may not have fluctuated much in the past, they are far more dynamic today.
- Bandwidth Bursts. Many providers now offer something known as bandwidth bursting. This allows the administrator to temporarily increase the amount of bandwidth available to the environment based on immediate demand, which is useful for seasonal or highly cyclical industries. There will be periods of business operation when more bandwidth is required to deliver the data; in those cases, look for partners who can dynamically increase that capacity and then de-provision those resources when they are no longer being used.
- Network Testing. Always test both your own network and the provider's network. Examine their internal speeds and see how your data will behave on that network. This also means taking a look at the various ISP and connectivity providers offered by the colocation provider. A poor networking infrastructure often won't be able to handle a large organization's 'Big Data' needs even with a fast internet connection, and without good QoS and ISP segmentation some data centers can become saturated. Look for partners with good, established connections providing guaranteed speeds (a minimal probe along these lines appears after this list).
- Know Your Applications. One of the best ways to gauge data requirements is to know and understand the underlying application or workload. Deployment best practices dictate a clear understanding of how an application functions, the resources it requires, and how well it operates on a given platform. By designing around the application's needs, there is less chance that the wrong resources are assigned to that workload.
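As referenced in the network-testing item above, a minimal latency probe might look like the sketch below. It is a sketch only, written in Python with a hypothetical test endpoint; for throughput, pair it with an established tool such as iperf3 run against a server inside the candidate facility:

```python
# Minimal latency probe for evaluating a provider's network. The endpoint below is a
# placeholder assumption; point it at whatever test hosts the colocation provider offers.

import socket
import statistics
import time


def tcp_connect_latency(host: str, port: int = 443, samples: int = 10) -> list[float]:
    """Measure TCP connect times in milliseconds as a rough proxy for round-trip latency."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # the connection closes immediately; we only want the handshake time
        results.append((time.perf_counter() - start) * 1000)
        time.sleep(0.2)  # brief pause so samples are not back-to-back
    return results


if __name__ == "__main__":
    latencies = tcp_connect_latency("speedtest.example-colo.net")  # hypothetical test endpoint
    print(f"median: {statistics.median(latencies):.1f} ms, worst: {max(latencies):.1f} ms")
```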
Balancing the Workload, Continuity and Disaster Recovery: Selecting a colocation provider goes well beyond choosing among its features and offerings. Companies looking to move to a provider platform must know what they are deploying, understand the continuity metrics of their infrastructure, and incorporate disaster recovery into their planning.
- Workload Balancing. When working with a data center provider, design your infrastructure around a well-balanced workload model. This means that no one server is over-provisioned and that each physical host is capable of handling the workload of another host should an event occur. Good workload balancing ensures that no single system is ever over-burdened, and this is where a good colocation partner can help. Monitoring tools can often see inside the workload to verify that the physical server running an application is operating optimally, and some providers offer dynamic workload balancing features; if that is a requirement, make sure to have that conversation with your colocation partner. (A simple illustration of this kind of utilization check appears after this list.)
- Business Continuity. In a business continuity model, the idea is to keep operations running optimally without disruptions to the general infrastructure. One of the best ways to understand business continuity metrics is, again, to conduct a business impact analysis (BIA). With documentation showing which workload or server is most critical, measures can be taken to ensure maximum uptime.
- Disaster Recovery. A core function of many colocation providers is their ability to act as a major disaster recovery component. In working with a partner, select a design which is capable of handling a major failure, while still recovering systems quickly. There is really no way of telling which components are more critical than others without conducting a BIA. Without this type of assessment, an organization can miss some vital pieces and severely lessen the effectiveness of a DR plan. Once the DR components are established, an organization can work with a colocation provider to develop a plan to ensure maximum uptime for those pieces. This is where clear communication and good DR documentation can really help. The idea here is to understand that a major event occurred and recover from that event as quickly and efficiently as possible. A good DR plan will have a price associated with it, but from a business uptime perspective, it’s worth it.
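As a simple illustration of the utilization check referenced in the workload-balancing item above, the sketch below flags hosts running well above the fleet average. The figures and the threshold are illustrative assumptions; real numbers would come from your own monitoring tooling or the colocation partner's monitoring service:

```python
# Illustrative utilization check: flag hosts running well above the fleet average so no
# single server quietly becomes a hot spot. The utilization figures and the 1.5x threshold
# are stand-in assumptions, not values from any particular monitoring product.

from statistics import mean

host_cpu_utilization = {   # percent CPU, hypothetical sample
    "host-01": 42.0,
    "host-02": 47.5,
    "host-03": 88.0,
    "host-04": 39.0,
}

fleet_average = mean(host_cpu_utilization.values())
threshold = 1.5 * fleet_average   # simple rule of thumb; tune for your environment

for host, utilization in sorted(host_cpu_utilization.items()):
    if utilization > threshold:
        print(f"{host}: {utilization:.0f}% CPU vs. fleet average of {fleet_average:.0f}% "
              "-- candidate for rebalancing")
```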
Various technologies can affect how well a data center performs. The distance the data has to travel and the amount of bandwidth provided by a colocation provider can mean the difference between a great user experience and a failed colocation deployment. Cloud computing has created a greater dependency on WAN technologies, and virtualization has enabled significantly more powerful servers and denser storage. With these new technologies come new considerations around how this type of data is stored and delivered. When selecting the right colocation provider, make sure that its infrastructure is capable of growing with the needs of your organization.

3:00p
Three Tips on Data Center Employee Safety

Markham Hurd is a Senior Consultant at Antea Group.
Data center facilities contain a company’s most sensitive and business-critical information, and as our modern world increasingly relies on mobile and internet data, the importance of these facilities intensifies. New data centers are constantly being built, and existing data centers are expanding and upgrading their equipment to meet future needs.
Consider this: The global data center construction market will grow from $14.59 billion in 2014 to $22.73 billion by 2019, according to a recent report from research firm Research and Markets.
A data center's success rests on maintaining a 24-hour runtime environment with efficient equipment. However, ensuring those two things isn't enough; operators must also keep in mind the facility's most indispensable element: the workers themselves.
A great deal of engineering, planning, and operational maintenance is needed to meet the high demands of continuous data center operations; however, an environment suitable for computer equipment may not be as safe for employees as it could be. Moreover, data center equipment and backup power systems pose many potential hazards: high-voltage electrical panels and circuits, UPS batteries, flammable materials, equipment maintenance at elevated heights, movement of heavy or awkward equipment, and fuel storage and handling are just a sampling of the potential risks. Even the most skilled and trained workers are vulnerable to serious injury if safety precautions are overlooked.
It is crucial to invest the requisite time, money and resources to ensure safe practices in the data center and to reduce the risk of workplace injuries. Here are three tips from environment, health and safety (EHS) industry experts for keeping worker safety a priority:
Engage the Right People Early
In building a data center, pre-planning to address EHS concerns is always better and much less costly than retrofitting. While timelines are usually tight during data center construction or expansion, involve operations and EHS managers during the design and initial launch of the facility to ensure that proper EHS requirements are met for the workers. Many data centers today need to make expensive changes to comply with EHS regulations that were not considered or addressed during the initial design steps. Remember that safety is a critical management-of-change component when any equipment or procedure changes in a working environment.
Enhance Awareness of Potential Hazards
Keep a list of safety hazards and update it as equipment or procedures are changed. Some hazards that all workers should be aware of include:
- Electrical hazards and high voltage panels and equipment
- Powered and manual loading and handling of equipment
- Work in high temperature areas (hot aisles, exterior areas with little ventilation)
- Work in loud equipment noise areas
- Working at heights
Prioritize Safety Training
For workers' safety, go beyond the initial training sessions and periodically provide updates to raise awareness and knowledge of common hazards. New employees must be properly trained and then accompanied by trained personnel before they are left to work on their own. All personnel working in the data center should also attend certified safety classes annually. Maintain a log of near misses and incidents to communicate lessons learned to all employees and to keep everyone accountable for each other's safety. EHS personnel should strive to create an open and communicative safety culture so that workers feel they are a critical part of the EHS team.
Building a strong safety culture, with EHS managers who reinforce safety and support the people running such a vital part of the organization, makes for an excellent work environment that attracts the top professionals all data centers require.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

3:30p
Internap Scores LEED Platinum for New Jersey Data Center

Internap has picked up a pair of energy certifications for its data center facility in Secaucus, New Jersey.
Randy Ortiz, VP of data center design and engineering at Internap, said that for the first time an Internap data center facility has achieved LEED Platinum certification from the US Green Building Council. At the same time, Internap also revealed that its New Jersey data center has earned an Energy Star certification from the US Environmental Protection Agency.
“As an industry, data centers consume quite a lot of power,” said Ortiz. “Anything that we do to address that also generates cost savings.”
While the Energy Star rating focuses primarily on energy efficiency of power supplies and overall power consumption, LEED addresses the sustainability of a data center, including the building materials used to build it, the indoor environmental quality, and usage of water. LEED also considers external sustainability factors, such as employee access to public transportation; whether the building is addressing environmental priorities for the geographic region it’s located in; waste management; and even pest control.
To address those requirements, Internap makes use of outdoor air economizers, variable frequency drives on chillers and pumps, and cold aisle/hot aisle containment zones in its data center facilities.
Internap additionally holds a mix of LEED, Green Globes, and Energy Star certifications at its data centers in Atlanta, Dallas, Los Angeles, and Santa Clara. In fact, Internap noted that its Santa Clara data center was the first in the US to achieve the Green Globes certification, a green building assessment program that helps commercial building owners advance environmental performance and sustainability.
Over the last few years, energy consumption has become both a cost and a public relations issue for operators of web-scale data centers.
As a consequence, many IT organizations now have mandates to reduce their carbon footprint, which includes any carbon generated by the companies that provide services to them. As such, energy consumption by data center colocation services has become not just a cost issue, but also a major brand image concern.

4:33p
Not All Clouds are Created Equal – Comparing Windows IaaS Environments

The modern cloud continues to grow and expand as more businesses drive the market forward. We’re seeing more use-cases, more demand, and a lot more users accessing cloud services.
In working with a variety of industries and verticals, one of the most critical points to understand is that your cloud must be wrapped around your business use-case. That means understanding the cloud provider, the kinds of services you require, and the long-term capabilities of that cloud ecosystem. Most of all, it’s important for your organization to know that there are a lot of cloud options out there, and they are not all equal.
In this whitepaper, we dive into this very topic: where cloud platforms differ, where there are certain advantages to one over another, and what you should be looking for in a solution. When it comes to cloud, it’s important that the provider be able to handle your workloads and the virtual environment on which they sit. To really gain a deeper understanding, we look at:
- vCPU Performance
- Memory Performance
- Storage Performance
- Internal Network Performance
From there, you begin to see why performance matters for your business and for your VMs. Different cloud service providers will show different metrics, so if your VMs require specific performance levels, it's critical to validate the numbers prior to signing any agreement. This practice allows you to identify the right cloud provider and the best price-to-performance ratio for what you're trying to accomplish; a minimal example of that kind of validation follows below.
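As one illustration of that validation step, the sketch below shows the kind of quick timing check you might run on trial VMs from each provider. The workloads and sizes are our assumptions, not drawn from the whitepaper; for a real evaluation, use established tools such as sysbench, fio, and iperf3, and repeat the runs at different times of day:

```python
# Very small CPU and memory timing of the kind you might run on trial VMs from each
# provider before committing. A sketch for relative comparison only; it is not the
# whitepaper's benchmark methodology.

import time


def cpu_benchmark(iterations: int = 2_000_000) -> float:
    """Time a fixed amount of integer work in seconds (lower is better)."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += (i * i) % 7
    return time.perf_counter() - start


def memory_benchmark(size_mb: int = 256) -> float:
    """Time allocating a large buffer and touching every page, in seconds (lower is better)."""
    start = time.perf_counter()
    buffer = bytearray(size_mb * 1024 * 1024)
    for offset in range(0, len(buffer), 4096):   # write to each 4 KB page
        buffer[offset] = 1
    return time.perf_counter() - start


if __name__ == "__main__":
    print(f"cpu work: {cpu_benchmark():.2f}s   memory touch: {memory_benchmark():.2f}s")
```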
Download this whitepaper today to learn more about comparing Windows IaaS environments and the specific findings around Amazon AWS, Expedient, and Microsoft Azure public clouds. You’ll understand where VMs perform optimally, where there are shortfalls, and how you can design a cloud architecture which directly fits your business needs.

6:37p
Webair Launches Private Cloud in Cologix Montreal Data Center 
This article originally appeared at The WHIR
Webair has launched its VMware-based private cloud solution out of Cologix's MTL2 Montreal data center, the company announced on Tuesday.
Launching the service in Montreal was a strategic decision based on demand and customer requests, and it allows Webair to leverage Cologix's strong uptime performance, responsive “Remote Hands” service, robust connectivity, and dynamic ecosystem, Webair said in a statement.
Cologix Canada GM Scott Adams noted that Montreal is “an emerging hub for cloud service providers, based on its position as a gateway market to Europe and its attractive power rates.”
The companies say Cologix's MTL2 facility is the most connected in Montreal, providing choice and low switching costs, which allow Webair to optimize its network agility and provide higher performance and lower costs to its customers.
“We like two elements of what Cologix is doing: 1) focusing on ecosystems in ‘Edge markets’ and 2) avoiding channel conflict by leaving the cloud and managed services space to best-in-breed competition,” noted Webair Chief Technology Officer Sagi Brody. “By deploying our enterprise-grade Private Cloud in the MTL2 facility, we provide customers colocated in the data centre with access to a fully managed and custom-built cloud platform that addresses their business-specific IT security, scalability and performance needs.”
Montreal-based Ormuco launched a hybrid cloud offering in June, shifting away from its traditional core business to meet growing local cloud demand.
This first ran at http://www.thewhir.com/web-hosting-news/webair-launches-private-cloud-in-cologix-montreal-data-center

9:07p
Cumulus Networks Makes a VMware Connection

The adoption of open networking products just gained a big-time boost at VMworld 2015. Cumulus Networks announced that Dell and Quanta Cloud Technology (QCT) will bundle its open networking software along with software from VMware, reported our sister site, The VAR Guy.
The bundling arrangement goes like this: Dell and QCT will configure hyperconverged rack systems based on VMware (VMW) EVO SDDC software with Cumulus Networks software. This will allow those systems to be integrated with “white box” bare-metal switches based on commodity processors.
This integration will go a long way toward simplifying the deployment and management of switches based on a Linux operating system for traditional enterprise IT organizations that lack the engineering skills needed to do so themselves.
Launched in 2013, Cumulus Networks has seven open-hardware partners, 80-plus solution partners, and more than 1 million switch ports deployed worldwide. The company powers data centers ranging from small businesses and universities to enterprises and some of the world’s largest cloud providers.
The original post can be found at: http://thevarguy.com/virtualization-applications-and-technologies/090115/cumulus-networks-makes-vmware-connection.