Data Center Knowledge | News and analysis for the data center industry
Monday, February 29th, 2016
Allocating Storage to VMs and Extending to Cloud
Storage is one of the hottest IT topics today. Acquisitions are happening regularly as more users move to flash and to new kinds of storage controller ecosystems. Powerful hybrid systems are emerging, and extending environments to cloud storage is having an even bigger impact. Throughout all of this, organizations must understand how to use these new types of storage resources and where they apply to their data centers.
The challenge to virtualization and storage engineers is this: How do you manage and work with all the new storage capabilities? Even more important, how can you dynamically manage workload storage requirements within a virtual environment?
Understanding Virtual Machine Storage Requirements
It goes without saying that planning is everything, and everything depends on the unique variables of the environment. Since each data center is different, certain questions must be answered prior to a storage initiative:
- The business needs to understand the scope of virtualization in the environment. Will the majority of its systems be virtualized, or will only a few VMs be running?
- More users, more services, and more applications will all impact computing resources, and you'll need to account for this. What are your business goals for the environment? That is, have you planned for the future?
Once a plan is established, the engineering team needs to understand what type of storage solution it will be rolling out. Some VMs require a fixed storage allocation, while others can operate more dynamically. Consider these two options (a brief sketch follows the list):
- Pre-allocate the entire storage for the virtual disk upon creation
- In this scenario, the virtual disk is deployed either as a single, large flat file or split over a collection of flat files, typically 2GB each, collectively called a split flat file. The pre-allocated storage architecture is also commonly known as "thick provisioning."
- Dynamically grow the storage on demand
- Here, the virtual disk can still be implemented using split or single files, with one core exception: storage is allocated on demand. This type of dynamic-growth storage is also known as "thin provisioning" (a term that both VMware and Citrix made popular).
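The sketch referenced above shows the difference concretely by shelling out to ESXi's vmkfstools utility, which can create a VMDK in either format. The -c (create, with a size) and -d (disk format) options are standard vmkfstools flags, but verify them against your ESXi version; the datastore path in the example is hypothetical, and XenServer and Hyper-V offer the same choice under different names (for example, fixed vs. dynamically expanding disks in Hyper-V).

```python
import subprocess

def create_virtual_disk(path, size="40g", provisioning="thin"):
    """Create a VMDK on an ESXi host using vmkfstools.

    provisioning: 'thin' grows on demand (thin provisioning);
    'eagerzeroedthick' pre-allocates and zeroes the full size up front
    (thick provisioning); 'zeroedthick' pre-allocates without zeroing.
    """
    if provisioning not in ("thin", "zeroedthick", "eagerzeroedthick"):
        raise ValueError("unknown provisioning type: %s" % provisioning)
    # -c creates a virtual disk of the given size, -d selects the format.
    subprocess.run(["vmkfstools", "-c", size, "-d", provisioning, path],
                   check=True)

# Hypothetical datastore path; run on the ESXi host itself.
# create_virtual_disk("/vmfs/volumes/datastore1/web01/web01.vmdk",
#                     size="40g", provisioning="thin")
```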
Storage is Finite, so Plan it Out
Almost any experienced IT engineer will confirm that storage is a valuable asset in any environment. One of the biggest problems facing any virtual deployment is storage. To be clear, the issue is not a lack of storage; it's how quickly the available amount gets consumed. Oftentimes, an IT manager will purchase several terabytes of disk space only to see it used up very quickly. After about three months of active usage, engineers start to notice that almost 70 percent of the originally purchased space has been consumed. So, what happened?
Once storage becomes available, it tends to be used up very quickly. Needed storage space often gets allocated without much planning. The point is, if you have a SAN with ample amounts of space, use it wisely and plan out its usage. By knowing and understanding what the VMs and workloads in an environment require, an IT engineer can make the storage infrastructure last much longer and work more efficiently. However, with virtualization constantly growing and the need to migrate old physical servers to VMs, allocating storage becomes a daunting task. This is where powerful hypervisor and virtualization technologies can really help.
VMware, Citrix's XenServer, and Microsoft Hyper-V, for example, are deployed with very sophisticated graphical user interfaces that provide a great deal of information. An administrator can see the connected storage repository, how it is being utilized, and the space requirements for each VM. Each new update to these hypervisors expands this storage-link capability to include more vendors, more features, and more control over storage directly at the GUI level. In fact, new features like VMware's vSAN technology take the storage conversation into the software-defined layer.
Using the hypervisor's own GUI, administrators can now monitor, allocate, and manage space requirements for all VMs. When thin provisioning (dynamic storage allocation) is used for virtual disks, it's very important to keep track of the unused space in the storage resource pool or datastore.
Over-allocating disk space becomes an issue when IT engineers are not keeping track of their free storage space. By keeping track of unallocated resources, engineers can apply best practices and take steps to either free up space in the existing resource pool or increase the size of that pool before application disruption or downtime occurs. To avoid system downtime, track space usage over time and set alerts or alarms that will call attention to a pending out-of-space issue.
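As a back-of-the-envelope illustration of those checks, the sketch below uses plain Python with made-up numbers; a real deployment would pull capacity, free space, and total provisioned size from the hypervisor's API or GUI and feed them into its built-in alarms. It flags a thin-provisioned datastore that is filling up or heavily overcommitted, the two thresholds discussed in the list that follows.

```python
def datastore_alerts(capacity_gb, free_gb, provisioned_gb,
                     full_threshold=80.0, overcommit_threshold=150.0):
    """Return alert messages for a thin-provisioned datastore.

    capacity_gb:    raw size of the datastore
    free_gb:        space not yet consumed by any virtual disk
    provisioned_gb: sum of the sizes promised to all virtual disks
                    (can exceed capacity when thin provisioning is used)
    """
    percent_full = 100.0 * (capacity_gb - free_gb) / capacity_gb
    percent_overcommitted = 100.0 * provisioned_gb / capacity_gb

    alerts = []
    if percent_full >= full_threshold:
        alerts.append(f"datastore {percent_full:.0f}% full "
                      f"(threshold {full_threshold:.0f}%)")
    if percent_overcommitted >= overcommit_threshold:
        alerts.append(f"datastore {percent_overcommitted:.0f}% overcommitted "
                      f"(threshold {overcommit_threshold:.0f}%)")
    return alerts

# Example with made-up numbers: 10TB datastore, 2TB free, 18TB promised to VMs.
print(datastore_alerts(capacity_gb=10240, free_gb=2048, provisioned_gb=18432))
```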
Remember, dynamic space allocation is nothing new. This feature has been available in most leading hypervisors for a few versions now. However, there are certain best practices to doing this the right way.
- Set an alarm for your space requirements.
- Adding additional space is not difficult; in reality, it can be accomplished with about three mouse clicks. The challenge is knowing how much space there is to allocate and whether the environment is running out. To resolve this problem, an engineer should set alarms within the hypervisor to properly manage thin provisioning. These alarms can be customized to trigger alerts at certain thresholds so that an IT administrator can take the actions required to prevent an out-of-space issue. Alarms can be set on a datastore for percent full as well as percent overcommitted.
- Document and monitor the environment
- Every major hypervisor's GUI is advanced enough that any IT engineer should be able to look at the storage repository and have a solid idea of where they stand on space. Working with space requirements is a never-ending process that requires attention at all times. Running out of space is not a pleasant issue to deal with, and it can largely be avoided by auditing and maintaining the storage environment.
- Keep the storage and hypervisor infrastructure updated
- Watching over the workload is an important ongoing task – keeping an eye on the storage hardware and hypervisor software is just as vital. New hardware and software releases promise better support and feature sets that help IT engineers manage their environments. Small changes can go a long way in managing space needs.
It's important to remember that every data center and business is unique, and therefore space requirements can be all over the board. However, there are some key best-practice tips and notes of caution that every IT engineer should keep in mind.
- Nothing is ever set in stone. Modifying the size of a VM is very common. Some VMs cannot be changed because their space requirements are preset either by the IT manager or by the vendor, but these cases are few. For the most part, a VM running in a storage pool can have its storage space modified, and administrators can add disk space as needed.
- Always monitor your VMs. As mentioned earlier, it's important to know which resources VMs are using at any given moment. Watching VMs perform over time and seeing when storage demands fluctuate allows an engineer to distribute resources properly when needed (a simple trend-projection sketch follows this list).
- Know your workloads. Never assume that an application or workload will always run the same. With service packs, additional users and changes in the overall environment, certain workloads can require more storage at any given time.
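The trend-projection sketch referenced above is deliberately naive: it fits a straight line through the oldest and newest usage samples and estimates when the pool fills. The sample values are invented; real monitoring would use many samples and the hypervisor's own counters.

```python
from datetime import datetime, timedelta

def days_until_full(samples, capacity_gb):
    """Naive linear projection of when a storage pool will run out.

    samples: list of (datetime, used_gb) measurements, oldest first.
    Returns the estimated days until used_gb reaches capacity_gb,
    or None if usage is flat or shrinking.
    """
    (t0, u0), (t1, u1) = samples[0], samples[-1]
    elapsed_days = (t1 - t0).total_seconds() / 86400.0
    growth_per_day = (u1 - u0) / elapsed_days
    if growth_per_day <= 0:
        return None
    return (capacity_gb - u1) / growth_per_day

# Invented measurements: a 10TB pool growing by roughly 40GB per day.
now = datetime(2016, 2, 29)
history = [(now - timedelta(days=90), 3000), (now, 6600)]
print(days_until_full(history, capacity_gb=10240))  # roughly 91 days left
```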
In today's ever-fluid IT infrastructure, it's more critical than ever to manage our resources. The hypervisor is the gateway into the cloud, and with it comes the sprawl of storage and data. Allocating storage to your critical workloads is an important task that helps support your business. At the heart of the cloud sit various storage repositories, all working hard to manage your virtual machines and data pools. It'll be up to the talented storage architects, and the tools they use, to ensure these storage ecosystems continue to run optimally.
Three Storage Trends Shaking Up the Enterprise
Dave Kresse is CEO of InterModal Data.
The storage industry has historically been left out of the conversation when discussing the innovative and ground-breaking feats coming out of the technology world. Now, that focus is shifting thanks to three emerging trends in the enterprise: the move away from integrated systems to software-on-commodity hardware architectures, the focus on utilization rates of physical resources, and the increasing need to support millions of individual workloads.
Companies such as Facebook, Google, and Amazon have devoted massive resources to building and maintaining customized data center infrastructures from the ground up. In doing so, these companies have realized tremendous levels of scalability, flexibility, and efficiency. Enterprises today face large and growing data storage requirements and are focused on achieving the same benefits, which is what is driving these trends.
Software Soars
As storage becomes a larger and larger part of the overall IT environment, companies are beginning not only to understand but to witness firsthand how expensive the integrated-system model for storage is. The business model of these vendors requires them to significantly mark up the commodity components they bundle with their software, and particularly at scale, this becomes exorbitant. Until recently, customers have had to live with this because software-only storage vendors were not delivering the level of reliability, availability, serviceability, and supportability required from enterprise-class storage solutions. However, there is a new generation of software-only, or software-defined, storage vendors who have taken a "systems" approach and are delivering the quality and predictability enterprises need. The shift away from integrated systems is underway.
Having the proper storage architecture in place can have a direct impact on other aspects of the business. Companies are constantly challenged by the exorbitant costs of their integrated systems, as these storage architectures tie a finite amount of capacity and performance to each set of controllers. Since the controllers are physically attached to the shelves, when either resource runs out, both must be expanded together. One way companies are fighting back is through software-only models that work seamlessly with commodity hardware. This approach delivers the reliability, availability, and serviceability needed to operate at maximum capacity in a large-scale environment.
Utilization is Key
When it comes to cost, moving to a software-plus-commodity-hardware model helps some, but the real culprit in storage is the high level of unutilized and underutilized resources stemming from the way solutions have historically been architected. Traditional storage architectures physically attach shelves to controllers, isolating a finite amount of performance and capacity within a given system. The problem is that unless a customer exhausts the performance and capacity at the same time, one of those two resources is underutilized within that system. At scale, this adds up to a tremendous amount of waste – waste not only of upfront capital dollars, but ongoing operational waste of space, power, cooling, and systems management.
New storage architectures are emerging that are purpose-built to address this utilization issue. One relatively new approach, which started inside Facebook, is a disaggregated architecture. A disaggregated storage architecture physically separates the infrastructure into component functions, which are connected over Ethernet. This approach allows enterprises to scale out granularly: if they need more performance, they can add it without having to add more capacity, and vice versa. Increasing resource utilization through disaggregation has in some cases resulted in up to a 10x reduction in the amount of physical resource required to support a given set of performance requirements.
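To make the stranded-resource math concrete, here is a back-of-the-envelope sketch (all numbers are invented for illustration, not vendor benchmarks): with coupled systems you buy whole units until the scarcer resource is satisfied, stranding the other; with disaggregation you buy each resource to fit.

```python
import math

def coupled_systems(needed_iops, needed_tb, iops_per_system, tb_per_system):
    """Integrated model: performance and capacity come bundled, so you must
    buy enough whole systems to satisfy the larger of the two needs."""
    return max(math.ceil(needed_iops / iops_per_system),
               math.ceil(needed_tb / tb_per_system))

def disaggregated_nodes(needed_iops, needed_tb,
                        iops_per_perf_node, tb_per_cap_node):
    """Disaggregated model: performance nodes and capacity shelves,
    connected over Ethernet, scale independently of each other."""
    return (math.ceil(needed_iops / iops_per_perf_node),
            math.ceil(needed_tb / tb_per_cap_node))

# Invented workload: heavy on IOPS, light on capacity.
print(coupled_systems(needed_iops=500_000, needed_tb=100,
                      iops_per_system=50_000, tb_per_system=200))
# -> 10 systems, i.e. 2,000TB purchased to cover a 100TB need

print(disaggregated_nodes(needed_iops=500_000, needed_tb=100,
                          iops_per_perf_node=50_000, tb_per_cap_node=50))
# -> (10 performance nodes, 2 capacity shelves), nothing stranded
```

In this toy case the coupled model strands 1,900TB of capacity just to reach the required performance; the disaggregated model buys only what each dimension needs.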
A New Definition of Scale
It was not too long ago that enterprises defined "scale" in terms of how large a file system a storage solution could support. That was an era when storage was purchased on an application-by-application basis. However, with virtualization now ubiquitous in the enterprise, the adoption of microservices on the horizon, and a shared storage model becoming the norm for efficiency's sake, enterprises now define scale in terms of how many file systems, or connections, a storage solution can support. As the definition of scale has been turned on its side, traditional storage solutions have struggled to adjust. Solutions today are architected to support hundreds of thousands of workloads, moving toward millions. The better ones allow customers to define classes of service so that workloads can be given different levels of resources to support disparate performance requirements.
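As a loose illustration of the classes-of-service idea (the tier names and limits below are invented, not any particular vendor's feature), each of the many thousands of workloads is simply tagged with a class rather than sized individually:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceClass:
    max_iops: int            # per-workload performance ceiling
    target_latency_ms: float

# Invented tiers; real products expose their own names and knobs.
CLASSES = {
    "gold":   ServiceClass(max_iops=20_000, target_latency_ms=1.0),
    "silver": ServiceClass(max_iops=5_000,  target_latency_ms=5.0),
    "bronze": ServiceClass(max_iops=1_000,  target_latency_ms=20.0),
}

# With hundreds of thousands of workloads, each gets a class tag
# instead of an individually engineered allocation.
workload_classes = {"billing-db": "gold", "log-archive": "bronze"}

def over_limit(workload: str, observed_iops: int) -> bool:
    """True if a workload is consuming more than its class allows."""
    return observed_iops > CLASSES[workload_classes[workload]].max_iops

print(over_limit("log-archive", 4_000))  # True: bronze is capped at 1,000 IOPS
```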
The Storage of Tomorrow Starts Today
Today, enterprises demand new levels of scalability, flexibility, and efficiency to meet the challenges they are experiencing with the large and growing amount of data they must manage. At the same time, they still require their storage to be reliable, available, serviceable, and supportable. In response to this demand, there has been an unprecedented amount of innovation around the storage market. The traditional storage vendors have not been able to keep pace. New entrants who have been able to deliver on these demands are winning in the marketplace at an ever accelerating rate.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
MIT, UMass Scientists to Study Solar-Powered Data Centers
Renewable energy is tricky to use, and it's even trickier to use in data centers, which have to be running around the clock, regardless of whether the sun is shining or the wind is blowing.
For data center operators that have turned to renewable energy, the three answers have been: a) using a combination of renewable generation and energy storage to supplement a data center's power supply, not replace it; b) investing in renewable energy generation for the same grid that feeds the data center – the grid that also has coal, nuclear, and other traditional energy sources; and c) simply buying Renewable Energy Credits equivalent to some or all of the energy a data center consumes.
Researchers behind an experimental project in Massachusetts hope to push progress further by studying the performance over time of a solar-powered micro data center launched this month. The test bed is called the Mass Net Zero Data Center, or MassNZ. The project's goal is to help researchers understand how to reduce data center energy consumption and increase data centers' ability to use renewable energy.
The project is a collaboration between the Massachusetts Green High Performance Computing Center (MGHPCC), a supercomputing facility in Holyoke shared by the region's major universities, including MIT, Harvard, and the University of Massachusetts Amherst, and Holyoke Gas and Electric, the local city-owned utility. It is funded partially by a National Science Foundation grant.
Read more: Cleaning Up Data Center Power is Dirty Work
“There are three major obstacles to research in sustainable data center design: availability of experimental infrastructure to enable realistic prototyping and evaluation, availability of realistic use cases from a state-of-the-art green data center, and real-time visibility into the utility infrastructure that provides data center power,” Christopher Hill, director of research computing for MIT’s Atmospheres, Oceans and Climate program, said in a statement. “The MassNZ addresses all three.”
Prashant Shenoy, a UMass computer science professor, said despite numerous data center design advances, there were still many unaddressed challenges. “How should a data center incorporate renewable sources of energy? How should future data centers interface with a smart electric grid to intelligently reduce their electricity bills? How should we design green HPC applications that intelligently manage power use?”
Hill and Shenoy are principal investigators in the MassNZ project, which is an extension of a project to build a shared computing resource for researchers and students from multiple universities at MGHPCC. The project, called Engaging-1, started with a $1.6 million grant from the NSF.
 The Massachusetts Green High Performance Computing Center is a 15MW LEED Platinum supercomputer data center in Holyoke, Massachusetts. It came online in 2012 (Photo: MGHPCC)
MassNZ is a 200-square-foot micro data center powered by solar panels located next to it. It has renewable cooling systems and energy storage systems comprising micro flywheels and batteries. It also serves as a distributed energy storage demonstration project by HG&E, the utility.
The researchers plan to deploy a variety of servers, storage systems, and networking gear in the facility.
They will collect power, cooling, and workload data from the micro data center and the 15MW HPC facility next to it to study opportunities to integrate data centers with smart grids, use machine learning and data to model sustainable data centers, and design new power management techniques for HPC applications.
MGHPCC executive director John Goodhue said MassNZ would expand the HPC center's ability "to serve as a living laboratory for research in sustainable data center design."
Telx Beefs Up Digital Realty's Interconnection Revenue and Cloud Strategy
Telx, the colocation and interconnection company data center giant Digital Realty Trust acquired last year, contributed $13 million in annual revenue from new leases signed in the fourth quarter of last year – the first quarter completed after the $1.9 billion acquisition closed.
While it made an attempt to go public in 2011, the plan was cancelled and Telx remained a private company. As a result, little has been known about its financial performance. Now that it’s part of one of the world’s largest publicly traded data center REITs, there’s a new degree of transparency.
Notably, since Telx’s robust interconnection business was what made it attractive, $7 million of its revenue contribution will come from interconnection services sold during the quarter. The remaining $6 million will be from data center space and power.
While Telx is contributing new revenue, Digital has had to write off some revenue as a result of the acquisition, going from quarterly income to quarterly loss. The company went from net income of about $58 million in the third quarter to net loss of about $17 million in the fourth quarter of 2015, attributing the loss primarily to “the write-off of straight-line rent receivables related to Telx.”
Digital Realty’s total revenue for the quarter was $500 million – up from $412 million it reported for Q4 2014. Its full-year 2015 revenue was $1.76 billion – up from $1.62 billion in 2014.
Using Interconnection to Draw Cloud Users
Telx’s interconnection capabilities and the interconnection ecosystems it has built in its meet-me rooms over the years are a major part of Digital Realty’s new strategy. Put simply, Digital is offering cloud service providers access to those interconnection hubs along with large chunks of wholesale data center capacity nearby, so they have room to grow.
Digital expects cloud providers to reel in enterprise customers looking for private network connections to their services. The two primary cloud providers Digital has named as part of this strategy are IBM and AT&T. Another customer that plays a big role is Equinix, which doesn't provide cloud services itself but counts all the major cloud providers as customers; they are attracted to Equinix for reasons similar to those that draw them to Telx, but at a much larger, global scale.
Read more: Digital Realty Leans on IBM, AT&T to Hook Enterprises on Hybrid Cloud
By relying on these service providers to attract enterprise customers, Digital Realty is leveraging a salesforce that’s larger than its own and also avoids directly competing with AT&T, IBM, and Equinix, who are some of its biggest customers. “It’s almost like outsourcing a salesforce,” John Stewart, senior VP of investor relations at Digital, said. “If they’re successful, we’ll be successful.”
Hard to Keep Up With Demand in N. Virginia
One of the first places where this new strategy will be seen in action is Northern Virginia, the biggest and most active data center market in the US. San Francisco-based Digital Realty has a lot of existing data center capacity there, but, seeing a huge amount of demand in the market, has bought an additional 2 million square feet of land in Ashburn, with access to 150MW of power.
There was less than 20MW of unused data center capacity available in the Northern Virginia market in the fourth quarter, according to Digital Realty’s internal estimates. That includes both finished data center space and space that’s currently under construction. There is more than 20MW of planned new construction in the market – more than in any other US data center market.
Read more: Digital Realty Takes Foot off Brake Pedal on Expansion
The only market that’s somewhat close to those figures is Chicago, where more than 20MW is currently available and just under 20MW of new construction planned.
“We are releasing space like hot cakes in Northern Virginia, and we literally can’t keep inventory on the shelves,” Stewart said.
Northern Virginia, Chicago, and Dallas are three of the markets companies generally go to today if they want a national-scale data center infrastructure in the US, he said.
Oracle, for example, recently leased 4.5MW of data center capacity with Digital in Ashburn and 5.5MW in Franklin Park, a village outside of Chicago, according to the commercial real estate firm North American Data Centers. Oracle has been expanding its data center capacity globally as it ratchets up its enterprise cloud business.
Microsoft also recently took 7.2MW with Digital Realty in Franklin Park.
Another example of a recent national-scale customer win for Digital is Uber. The mobile ride-hailing startup took 6MW at a Digital Realty data center in Dallas and 4MW in Ashburn last year, according to North American Data Centers. Uber also took 4MW in Santa Clara, California, with Digital’s competitor CoreSite Realty Corp.
Read more: Who Leased the Most Data Center Space in 2015?
Expanding Internationally
Digital Realty recently added a whole new market to its portfolio, buying a piece of land in Frankfurt, a market it said it had been eyeing for some time now.
“It’s a market that we have essentially followed customer demand into,” Stewart said. “We’ve heard over and over that Frankfurt was the market where [customers] wanted to be.”
Frankfurt is one of Europe’s most active data center markets. It is a major network interconnection hub and a key internet gateway into Eastern European markets.
Digital Realty, which currently has 22 data centers in Europe, bought a 27-acre parcel in Frankfurt, with access to 27MW of power capacity. The site will accommodate a three-building data center campus, the company said.
Frankfurt was one of the major gaps in Digital's global coverage, Stewart said. Another gap is Tokyo, he added. So far, its Asia Pacific strategy has been focused on Singapore, Hong Kong, and Australia, but it's likely that Digital will step into Tokyo in the near future, adding a second location in Japan to its existing Osaka data center.
How IT Decisions Impact Your Data Center
The modern business is directly tied to the capabilities of IT. Most of all, your data center now impacts how you create business goals and entire strategic directives. This means that business leaders and data center facilities managers must work in unison to create a truly cohesive ecosystem.
Decisions and actions on the IT side of the house can have a profound impact on mechanical systems and, as a result, on the operating costs and capacity of the data center.
When all sides of the house collaborate, there are specific benefits to the business and the entire data center environment. Consider these top challenges that collaboration aims to overcome:
- There is a lot of money being left on the table in the form of unrealized operating cost savings.
- There is a lot of stranded capacity forcing redundant capital expenditure.
- The lack of a shared understanding and cooperative effort causes a great deal of friction and wasted man hours.
At the Data Center World Global conference in Las Vegas, coming in March, learn how data center facilities and IT staff can collaborate more effectively.
In one of the sessions, Lars Strong, senior engineer at Upsite Technologies, will discuss the critical elements in effective collaboration. With extensive experience around data center optimization, Strong is a data center thought leader who regularly designs, develops, and contributes to complex technical specifications around data center environmental management technologies.
Read more: What IT Managers Need to Know about Data Center Cooling
When it comes to the modern data center, there are several challenges, according to Strong:
- Because of advances on both the facilities and IT sides of the business, it is no longer possible for the two organizations to operate independently; not if an organization is to be competitive.
- Understanding IT equipment delta T and its relationship to cooling unit delta T is crucial to optimizing the efficiency of the cooling infrastructure (a simplified airflow example follows this list).
- Airflow management (AFM) best practices are not just nice to have; they significantly affect the overall operating cost of the facility.
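As a simplified illustration of the delta T relationship Strong describes, the sketch below applies the common sensible-heat rule of thumb (CFM ≈ 3.16 × watts / ΔT in °F, an approximation, not a substitute for proper design): widening the temperature rise across IT equipment removes the same heat load with less airflow, which is where much of the cooling operating-cost saving comes from.

```python
def required_airflow_cfm(it_load_watts, delta_t_f):
    """Approximate airflow needed to carry away an IT heat load.

    Uses the common sensible-heat rule of thumb:
        CFM ~= 3.16 * watts / delta-T (deg F)
    A wider delta T across the IT equipment means the same heat load
    can be removed with less airflow (and less fan energy).
    """
    return 3.16 * it_load_watts / delta_t_f

# Example: a 5kW cabinet at a 20F delta T vs. a 25F delta T.
print(round(required_airflow_cfm(5000, 20)))  # about 790 CFM
print(round(required_airflow_cfm(5000, 25)))  # about 632 CFM
```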
“By understanding these relationships, IT and facilities management should be able to form a more cooperative approach to managing the data center, resulting in a more effective and efficient operation, thereby better fulfilling often contradictory objectives,” Strong said.
“The end goal in discussing these considerations is to have an agreement between IT and facilities as to how these decisions impact the other. To ensure efficiency and optimize the data center, both teams need to work together and understand the impact that the aforementioned considerations will ultimately have on the data center environment.”
Want to learn more? Join Upsite’s Lars Strong and 1,300 of your peers at Data Center World Global 2016, March 14-18, in Las Vegas, NV, for a real-world, “get it done” approach to converging efficiency, resiliency and agility for data center leadership in the digital enterprise. More details on the Data Center World website.