Data Center Knowledge | News and analysis for the data center industry
Tuesday, April 21st, 2015
National Lab Reins in Data Center Management Chaos

Scott Milliken hates the often-heard saying that people are the biggest reason for data center outages. It’s not people, he says; it’s people who don’t know what they’re doing.
And it’s fair to say that most of the scientists who are customers of the data centers he runs don’t know what they’re doing when it comes to data center management.
Milliken is the computer facility manager at the U.S. Department of Energy’s Oak Ridge National Laboratory in Oak Ridge, Tennessee. Speaking Monday at the Data Center World conference in Las Vegas, he talked about the challenge of managing one of the most chaotic types of data center environments and what he and his team did to rein in the chaos.
The close-to-15-year-old ORNL data center is the polar opposite of the data centers the likes of Facebook or Google operate. Those hyperscale facilities support extremely homogeneous IT equipment, and as a result are able to maximize standardization and reach extreme efficiency.
Partly because of the nature of the workloads running in ORNL’s data centers and partly because of the way the government funds its research projects, anything close to hyperscale levels of standardization is simply impossible.
Colo for Government Research
Milliken and his team provide data center services to a large group of users, each responsible for buying IT equipment to support his or her own computing needs. “We were almost like a colo for government research institutions,” Milliken said.
The data center has two stories, 20,000 square feet each. The first floor houses the lab’s three supercomputers, and the second floor is where scientists’ gear lives.
Historically, there wasn’t a formalized process for placing new equipment on the second floor. Researchers used their grant money to buy servers, racks, airflow-management solutions, and in some cases power distribution equipment.
Because grant money is scarce, Milliken’s customers got territorial about data center space they had been allocated and equipment they had purchased. If somebody paid for a power-panel upgrade, they were inflexible about who could and who could not use the panel, for example.
“Fiefdoms were created and maintained,” he said. “Just kidding. They were not maintained at all.”
And that was one of the biggest problems. Clear documentation and labeling of equipment is crucial in effective data center management, and most of Milliken’s customers weren’t very disciplined about these things.
Chaos is Unsustainable
The chaotic environment often led to availability issues, and the data center management team found itself spending much of its time resolving problems. The status quo was clearly unsustainable.
So they decided to make improvements by instituting new processes. One was strict enforcement of documentation.
The other was taking over the responsibility for supplying racks, airflow-management, and power distribution equipment. This was a good way to lessen the data center management burden on tenants and to standardize the infrastructure components coming into the facility. They also realized that paying for the infrastructure equipment themselves would cost less than continuing to burn staff time on problems caused by the chaotic environment.
Starting From Scratch
Very soon, however, the team realized that to really do things right, they needed a whole new data center, so they built one. The new facility came online about a year and a half ago, Milliken said.
Since it launched, no new equipment has gone into the old facility. They have standardized the way they deploy servers in the new data center to the maximum extent possible given the nature of their clientele.
The team now uses a standard contained pod that includes 22 to 28 cabinets and has in-row coolers and its own electrical circuits. Using this approach has given Milliken and his staff visibility and control of their costs and timelines beyond anything that was possible in the old facility.
There are no more users rolling in data center cabinets of their own. There are no more wildly varying configurations. Expansion has become “predictable and repeatable,” he said.
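For a sense of how a standardized pod simplifies planning, here is a minimal capacity-check sketch. The 22-to-28-cabinet pod size comes from the article; the per-cabinet draw and the pod power budget are assumed figures for illustration, not ORNL’s actual numbers.

```python
# Rough capacity check for a standardized contained pod.
# The cabinet counts (22-28) are from the article; the average draw per
# cabinet and the pod power budget are ASSUMED values for illustration only.

def pod_load(cabinets, avg_kw_per_cabinet):
    """Projected IT load for a pod, in kW."""
    return cabinets * avg_kw_per_cabinet

if __name__ == "__main__":
    POD_BUDGET_KW = 180.0                     # assumed electrical budget per pod
    for cabinets in (22, 28):                 # pod sizes cited in the article
        load = pod_load(cabinets, avg_kw_per_cabinet=6.0)
        verdict = "fits" if load <= POD_BUDGET_KW else "exceeds"
        print(f"{cabinets} cabinets -> {load:.0f} kW projected, "
              f"{verdict} the assumed {POD_BUDGET_KW:.0f} kW pod budget")
```

Because every pod has the same known envelope, the same check answers the same question for every future expansion, which is what makes growth “predictable and repeatable.”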
The old data center will not be decommissioned all at once. As the equipment it supports reaches end of life, it will gradually be phased out, and replacements (if necessary) will be installed in the new facility. Once the old one is empty, Milliken’s plan is to gut and remodel it into a modern facility and manage it in the new, more effective way.
Data Center Design Needs to Accommodate Diversifying Client Requirements

Design for flexibility and build incrementally in a way that ensures quality stays the same or goes up as you near completion, said Dave Leonard, chief data center officer at ViaWest, in a Monday Data Center World presentation.
When designing data centers, tension exists between quality, cost, and energy efficiency, he said, and design is about managing the tradeoffs.
“Think of stretching a rubber band across contradictory design objectives,” said Leonard. “In the average corporate data center, it’s very high quality, but cost and energy efficiency pulls back in. The social media guys started focusing on cost and energy efficiency but the quality suffers. They don’t need quality because their apps can sustain failures, so they’re able to do things that most enterprises can’t.”
Quality here is meant in the traditional sense of fault tolerance, which is achieved differently today than it once was. “The old way was duplicating everything,” said Leonard. “Everything is half loaded, so you sacrifice on energy efficiency.”
Leonard said that ViaWest’s design is centered on allowing flexibility across several variables to accommodate diverse needs. “The form factor will change significantly, but the macro trend of rising densities will be more consistent,” he said. “Customers want different voltages, mechanisms, etc.”
Customer needs are diversifying on the whole. In order to allow this flexibility, ViaWest takes a different approach to building incrementally.
The company phases its builds by first delivering half the total space with minimal supporting infrastructure, so it only has to worry about adding capacity going forward. In this approach, the remaining space can be divorced from the infrastructure if customers simply aren’t demanding the density.
“It doesn’t look like equal pods,” said Leonard. “We don’t want to tie infrastructure to data halls. You’ll run out of floor space and/or power.”
In a traditional data hall approach, infrastructure and space are married, which gives rise to complications if that space isn’t used in the way you expected it to be. In colocation, the likelihood is very high that the data center will not consist of uniform racks.
“That ability to play Tetris is important in colo,” said Leonard.
The old enterprise approach to quality described above was to duplicate everything. Rather than running at 50 percent utilization in an A+B configuration, ViaWest’s design has 12 megawatts supporting 9 megawatts in a round-robin-style UPS arrangement. From a rack perspective, the customer still has that redundancy; only in this setup, one rack can be A+B, while the next is A+C.
The downside is that the setup requires a lot of time and attention, said Leonard. The UPS design saves on equipment purchases but means more time and care must be spent ensuring it’s configured properly.
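The 12-megawatts-supporting-9 arithmetic falls out naturally if you picture the UPS fleet as four 3 MW systems (A through D) with rack feeds rotated across pairs. The four-way split is an assumption for illustration (the presentation only gives the totals), but the sketch below shows why losing any single system still leaves enough capacity, and why every rack still sees two independent feeds.

```python
# Sketch of a distributed-redundant ("round-robin") UPS layout.
# The article gives 12 MW of UPS supporting 9 MW of load; the split into
# four 3 MW systems (A-D) is an ASSUMED illustration consistent with those totals.
from itertools import combinations, cycle

UPS_MW = {"A": 3.0, "B": 3.0, "C": 3.0, "D": 3.0}   # 12 MW installed
CRITICAL_LOAD_MW = 9.0

# Any single UPS can fail and the survivors must still carry the full load.
for failed in UPS_MW:
    surviving = sum(mw for name, mw in UPS_MW.items() if name != failed)
    assert surviving >= CRITICAL_LOAD_MW, f"losing {failed} would overload the rest"

# Rack feeds rotate through UPS pairs: A+B, A+C, A+D, B+C, and so on.
pairs = cycle(combinations(sorted(UPS_MW), 2))
for rack in range(1, 7):
    print(f"rack {rack}: feeds {' + '.join(next(pairs))}")

# Fleet utilization is 9/12 = 75%, versus at most 50% in a fully duplicated A+B plant.
print(f"fleet utilization: {CRITICAL_LOAD_MW / sum(UPS_MW.values()):.0%}")
```

The specific module count matters less than the principle: redundancy is spread across the fleet rather than duplicated for every rack, which is where the extra configuration care comes in.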
Leonard is a big proponent of a standardized data center design that allows as much flexibility as possible. That flexibility means that phasing looks a little different than the standard incremental build-out.
The ability to react to what a customer needs is important. But Leonard also spoke of the importance of standardizing and replicating. Data center design needs to be flexible but consistent. However, you can’t standardize from a geographic and environmental standpoint, so ViaWest doesn’t completely mimic the design in each market. The same data center in different states needs different approaches.
The next two data centers for ViaWest will be in Oregon, which employs that standard design, and in Calgary, which doesn’t. Calgary is going off a different script because it is a different size from the rest.
There’s a tradeoff when you consider making design changes, and that tradeoff has to be compelling. “Perfect is the enemy of good enough” is Leonard’s mantra.
Microsoft’s Seven Tenets of Data Center Efficiency

Paul Slater thinks robots replacing some of the humans working in data centers today is not only a real possibility but something that’s likely to happen within the next decade. And if you’re designing a data center today that you’re planning to use for longer than 10 years, you should probably think about what that means for your design.
As the field of robotics shifts away from the static “dumb” robots that have resulted in inflexible manufacturing facilities toward more versatility, and as the design of data centers, and especially data center hardware, moves toward more standardized commodity equipment whose individual components can be easily replaced (see the Open Compute Project), “we’d expect to see robots much more inside the data center,” Slater said.
Slater is director of the Applied Incubation Team at Microsoft, where he is deeply involved with the company’s data center strategy. He presented on that strategy at the Data Center World conference in Las Vegas Monday.
While robots in data centers are a thing of the not-too-distant future, Microsoft already has some of the most efficient data centers in the world. Slater has started an initiative within the company to share the ways it achieves data center efficiency with the world and find areas that can apply to smaller enterprise data centers, whose challenges may be very different from those of homogeneous hyperscale facilities.
Microsoft’s data center efficiency strategy has seven key tenets:
1. Design the Data Center for Its Environment
There’s no one answer to the question of what makes a great data center. That’s why there are so many vendors selling such diverse solutions into the space, and none of them are effectively winning over the others.
A big reason for that is location. You can be in a place where space and power are cheap, in which case you can build a sprawling data center that’s efficient and reliable because it has relatively low density per rack. If you want a data center next to the New York Stock Exchange in Manhattan, you’re playing a very different game, where every square foot and every watt matter a lot.
This is why site selection always precedes data center design at Microsoft. The process takes into account the environment, cost and availability of power, proximity to the grid and the grid’s reliability, political implications of locating in a certain place, as well as tax implications.
“Only when the site selection is done are we looking to complete the design for that environment,” Slater said.
2. Design a Data Center Full of Standard Stuff
The most efficient data center is one that is full, so Microsoft always fills its facilities up as soon as possible. It’s also important to fill the data center with as much standard equipment as possible. This makes managing the assets with software tools, such as Data Center Infrastructure Management (DCIM) suites, more effective.
DCIM is great, but it’s great only if you know the behavior of all the pieces of gear inside your data center, Slater said. Because everything is the same in Microsoft data centers, DCIM is “extremely powerful” for the company.
3. Design for Flexibility
You have to build into your design the ability to adapt to changes. Because technology changes so quickly, you have to assume equipment inside your data center is going to change and that you will not know exactly how it is going to change.
The move to SSD in storage, for example, has implications on thermodynamics in the data center, Slater said. Also, the robots are coming, remember?
4. Automate
“We ruthlessly standardize, and we ruthlessly automate,” Slater said. The two go hand in hand, because it is easier to automate management of a homogeneous environment than one with a wide variety of different systems.
One of the biggest standardization efforts at Microsoft happened only about two years ago, when the company switched from supporting every cloud service with servers tailored to that particular service to a hardware strategy consisting of only three SKUs. The company donated the designs of its new servers to the Open Compute Project last year.
5. Design the Data Center as an Integrated System
If you want data center efficiency, design “from the top down,” Slater said. You have to start with assessing the applications or services the data center is going to support, and make design decisions based on that knowledge.
If successful, you end up with a highly integrated system, where every moving part works to support the application in the most optimal way.
6. Rely on Resilient, Flexible Software
Instead of ensuring things don’t go down by doubling up on power and cooling gear, build resiliency into software. Software generally gets better over time, while hardware just gets older, so a software investment looks better three years down the line.
Since Microsoft has transitioned to being a cloud service provider rather than a company focused on selling software licenses, it has been running its own software at much larger scale than its customers run it, so its engineers have learned a lot about designing resilient software. The single biggest implementation of Exchange, for example, is Microsoft’s Office 365 service.
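A minimal sketch of what resiliency in software can look like in practice, independent of anything specific to Microsoft’s stack: the client retries and fails over across service replicas instead of relying on duplicated power and cooling to keep a single instance alive. The replica URLs, timeout, and retry policy below are assumptions for illustration.

```python
# Minimal software-resiliency sketch: retry and fail over across replicas
# instead of depending on fully redundant facility infrastructure.
# The replica URLs, timeout, and retry policy are HYPOTHETICAL, for illustration only.
import time
import urllib.error
import urllib.request

REPLICAS = [
    "https://service-a.example.com/api/status",
    "https://service-b.example.com/api/status",
]

def call_with_failover(replicas, attempts_per_replica=2, backoff_s=0.5):
    """Try each replica in turn, retrying with a short backoff before failing over."""
    last_error = None
    for url in replicas:
        for attempt in range(attempts_per_replica):
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    return resp.read()
            except (urllib.error.URLError, TimeoutError) as exc:
                last_error = exc
                time.sleep(backoff_s * (attempt + 1))
    raise RuntimeError(f"all replicas failed: {last_error}")
```

The specific retry policy matters less than where the redundancy lives: in code that assumes any single instance, rack, or even site can disappear.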
7. Design a Data Center That Will Be Ready to Operate Quickly
The faster you bring online a data center that will support a particular service, the better. It will mean the software is written for the best available hardware, and the data center will be designed to support that hardware. Slater calls it “riding Moore’s law.”
Not everything here will apply to enterprise data centers or colocation facilities. Standardizing on a single hardware platform, for example, can be ruled out right away. But maximizing the degree of standardization in a facility can help a lot, as we learned in another Data Center World presentation Monday – one by Oak Ridge National Laboratory’s computer facility manager Scott Milliken.
Most enterprise data center operators also don’t have the scale and the buying power the likes of Microsoft have. Despite those differences, however, Slater believes there are still lessons in the way hyperscale operators set up their infrastructure that can be valuable for smaller facilities.
CenturyLink Buys NoSQL Database-as-a-Service Firm Orchestrate

CenturyLink, the Monroe, Louisiana-based telecommunications and data center services giant, has acquired Orchestrate, a startup that provides a variety of NoSQL databases as a service through a single API (application programming interface). Financial terms of the transaction were not disclosed.
While CenturyLink hosts and manages traditional Oracle or Microsoft SQL databases as services for some of its customers, this will be the first time the company offers NoSQL Database-as-a-Service in the expression’s true sense, the way Amazon Web Services and Microsoft Azure do, said Jonathan King, vice president of platform strategy and business development at CenturyLink.
The NoSQL database space has evolved rapidly over the past several years, and with the rise of new types of databases, developers often pick multiple databases for a single application.
The Database-as-a-Service deal is another acquisition CenturyLink has made to go after the developer market. The company bought Infrastructure-as-a-Service provider Tier 3 and Platform-as-a-Service provider AppFog in 2013, going after the same group of users. Last year, it acquired cloud disaster recovery company DataGardens and a big data analytics firm called Cognilytics.
Running a variety of NoSQL databases on the backend, Orchestrate offers an API that supports full-text search, graph, time series, key-value, and geospatial queries – the types of queries necessary for building modern Internet-of-Things, mobile, or connected-web applications, Orchestrate CEO Antony Falco said.
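To make the “many query types behind one API” idea concrete, here is a purely hypothetical sketch. The hostname, endpoint paths, and parameters are invented for illustration and are not Orchestrate’s actual API.

```python
# HYPOTHETICAL sketch of querying several data models through one REST-style API.
# The hostname, paths, and parameters are invented for illustration only;
# this is not Orchestrate's actual interface.
import json
import urllib.parse
import urllib.request

BASE = "https://api.example-dbaas.com/v0"    # hypothetical service endpoint

def query(path, **params):
    """Issue a GET against the hypothetical service and decode the JSON response."""
    url = f"{BASE}/{path}"
    if params:
        url += "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

# The same tiny client covers key-value, full-text, and geospatial lookups.
profile = query("users/alice")                                    # key-value fetch
matches = query("users", q="sensor fault", kind="fulltext")       # full-text search
nearby  = query("stations", lat=35.93, lng=-84.31, radius="5km")  # geospatial query
```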
The company, whose 10 staff members will be joining CenturyLink, is based in Portland, Oregon.
It has already expanded the number of data center locations its services are hosted at from two to six as a result of the acquisition. It added four CenturyLink locations (East Coast, West Coast of the U.S., U.K., and Singapore) to the previously existing AWS cloud locations on the East Coast and in Europe.
In addition to gaining new Database-as-a-Service capabilities, CenturyLink was attracted to the deal by the level of talent at Orchestrate, King said. Another reason was the need for a solution like Orchestrate in the giant’s own development efforts.
The Intelligent Data Center

Tim Hazzard serves as president of Methode Electronics – Data Solutions Group, which provides data center infrastructure management (DCIM), active energy, cabinet, cabling, and customized data center solutions throughout North America and Europe.
Data center designs represent a complex ecosystem of interdependent technology and processes. Solutions are rarely black or white, and sometimes an outside resource or consultant is needed to help bring order to the chaos. For data centers, a fully integrated data center infrastructure management (DCIM) solution can be the answer.
Reliability is key to data center operations. A data center’s infrastructure must be available 24/7 to meet the increasing demands of today’s virtual commerce and activity. If any part of this environment fails or has performance issues, the business slows. While higher-level performance monitoring has been available for some time, granular details have been hard to collect and analyze. DCIM tools now allow for an unprecedented ability to collect data and perform predictive analysis of potential failures.
In a small data center with few variables, physical monitoring could be manageable for a short time. Staff could track select metrics and keep records in a spreadsheet or database. However, as a data center becomes larger and more complex, this method increases risk and can lead to potentially catastrophic results. A complex data center adds more systems to monitor with each new asset. As operations and technology scale and become more robust, the data center manager can quickly become overwhelmed by massive waves of data and the analytics needed to make sense of it.
Some of the potential dangers are fairly obvious. Relying continuously on staff resource availability may not be prudent. There’s risk of missed critical reporting intervals, transposed numbers, or even overlooked warning signs.
Other threats may be less visible. There are hidden dangers lurking in uncoordinated assets. Temperature alerts might actually be linked to improper power distribution. Discovery of a relocated or missing asset might start with a security breach. Reliance on contractors to support the data center can introduce new security risks. Under a siloed monitoring approach, an isolated alert might not quickly and easily point to the root of the problem and could take hours to resolve, leaving company assets and revenue at serious risk.
Fortunately, new developments in data center monitoring and asset tracking now allow for real-time assessment of assets and metrics through a single, unified system. DCIM solutions offer the ability to seamlessly connect hardware with software systems, and provide managers with a holistic view of the data center’s performance and health.
There are many advantages to implementing a turn-key, integrated DCIM solution. What follows are the most significant benefits of a seamless data and asset management system.
24/7 monitoring with automated intelligent alerts. Modern data centers require around-the-clock surveillance. Demands on data occur at all hours of the day and night as companies serve global customers that expect constant access. Rather than solely depending upon staff to monitor critical functions and assets, an integrated DCIM solution can gather data at specified intervals and disperse it for real insight into the data center’s function. Ongoing, consistent reporting provides data center managers with the assurance that critical tasks are being performed, as specified, without interruption. Intelligent alerts also eliminate errant or unnecessary “false alarms” that are sometimes sent across multiple monitoring systems. (A minimal sketch of this kind of alerting follows this list of benefits.)
Better allocation of physical assets and staff. An integrated, turnkey DCIM solution can significantly improve operational efficiency across the entire data center, allowing for more productive use of physical assets and staff resources. For the data center manager, DCIM provides a holistic view of key performance indicators. With the right information, the manager can decide how to integrate resources and where improvements can be made.
In addition to properly allocating physical assets, forward-thinking managers also can evaluate staff resources and operational efficiencies. With intelligent monitoring, employees are able to spend less time analyzing data and instead focus on more important tasks, such as maintaining equipment, delivering more predictive analysis to prevent potential failures, and providing additional internal services.
Reduced risk from proactive monitoring. Data center managers know there are many risks associated with operating a complex facility. The more elements and assets present, the greater the potential for environmental, energy and security interruptions. Rather than reacting after a problem has occurred, the right DCIM solution allows for proactive problem solving and prevention.
Across the world, data has become highly regulated, forcing data centers to comply with strict legal requirements for security. DCIM solutions allow for greater security across various features, balancing necessary access of appropriate staff with protection of sensitive information.
Scalability as the data center evolves. What a data center requires in year one likely will differ from its needs in year three. A phased approach can provide for immediate results, while allowing the time needed to research and add more sophisticated functionality.
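As a concrete illustration of the intelligent-alerts benefit above, here is a minimal sketch of threshold alerting with hysteresis, so a sensor hovering around its limit raises one actionable alert instead of a stream of false alarms. The thresholds and sensor name are assumptions; this is not any particular DCIM product’s interface.

```python
# Minimal threshold-alerting sketch with hysteresis: a reading that hovers
# near its limit produces one alert instead of a flood of false alarms.
# Thresholds and sensor names are ASSUMED for illustration only.

class TemperatureAlert:
    def __init__(self, trigger_c=27.0, clear_c=25.0):
        self.trigger_c = trigger_c   # raise an alert at or above this reading
        self.clear_c = clear_c       # re-arm only after readings fall back below this
        self.active = False

    def update(self, sensor, reading_c):
        """Return an alert message on the transition into the alarm state, else None."""
        if not self.active and reading_c >= self.trigger_c:
            self.active = True
            return f"ALERT {sensor}: {reading_c:.1f} C exceeds {self.trigger_c:.1f} C"
        if self.active and reading_c <= self.clear_c:
            self.active = False
        return None

if __name__ == "__main__":
    alert = TemperatureAlert()
    for reading_c in (24.8, 27.2, 27.6, 26.1, 24.9, 27.4):   # simulated inlet temps
        message = alert.update("cold-aisle-3/inlet", reading_c)
        if message:
            print(message)   # fires twice, not on every high reading
```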
When looking for a DCIM provider, it is important to identify a solution that integrates with the existing infrastructure and synchs with current assets. The right provider will work alongside your data center team to:
- Identify current challenges and business objectives
- Evaluate tools and processes in place, and assess how they can be integrated with DCIM
- Determine the most valuable solution for today with the ability to scale for future needs
- Ensure long-term value – from system design and installation to ongoing maintenance and consultation
An integrated DCIM solution will not yield a productive data center on its own. Like hiring the right staff, an organization must find the proper fit for its processes and philosophies. To develop a forward-thinking performance management and response strategy, an organization must determine the right DCIM solution – one that’s designed to meet its needs today and scale for the vision of tomorrow.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
How Cloud Has Changed the Data Center Architect

Cloud computing has fundamentally changed the way we deploy applications, control users, and deliver resources; and our ability to interconnect data centers today has allowed organizations of all sizes to be more agile and cost-effective.
As this cloud evolution continues, it’s critical to understand that behind all of these future technologies sits the data center. Your underlying hardware architecture is meant to support an expansive array of solutions, one that has allowed us to move far past the point of one service per server.
Today, physical buildings still stand, physical servers need to be managed, and resources must be controlled. However, sweeping changes have taken place. One of the most critical pieces of the modern data center is the data center architect. This is the individual who must clearly understand the requirements of your entire infrastructure. Considerations around power, cooling, location, available utilities, and even pricing all fall under the realm of the data center architect. Now, these professionals must take a new look at the architecture that they work so hard to support. Through disaster recovery, new kinds of applications, and constant demands from users, cloud is forcing data center architects to evolve and even change some older schools of thought.
When it comes to changes in cloud and data center evolution, the trends speak for themselves. The use of cloud computing is growing, and by 2016 it will become the bulk of new IT spend, according to Gartner. This will be the year when the private cloud begins to give way to the hybrid cloud, and nearly half of large enterprises will have mixed deployments by the end of 2017. Furthermore, Cisco reports that by 2018, more than three quarters (78 percent) of workloads will be processed by cloud data centers. And, by 2018, 59 percent of the total cloud workloads will be Software-as-a-Service (SaaS) workloads.
With growth in cloud, it’ll be very important for the data center architect to really think outside the box. Now, let’s look at a few ways that cloud has changed the data center architect.
- Data center design and convergence. Simply put, there have been big changes in physical architecture design. New kinds of servers are being implemented in more efficient rack systems. Furthermore, considerations around new converged infrastructure have given data center architects new ways to create their underlying ecosystem. There are a few more realities here to consider. There are more conversations today around commodity technologies and even more ways to effectively deploy a data center environment. Moving forward, data center architects will need to look at a variety of design options concerning their data center model. Furthermore, it’s important to understand how those underlying resources work with and extend into the cloud.
- Evolution in power and cooling. Powerful fanwall technologies, new kinds of “free cooling” concepts, and evolved air flow management techniques have all impacted how the data center performs today. As a result, organizations are looking closer at hydroelectric power options as well as more effective ways to get their PUE down. For the data center architect, it’s critical to understand that cloud computing has placed even more reliance on modern data centers, making organizations even more dependent on them. Furthermore, data center architects must understand how cloud technologies have changed densities, virtualization values, and the underlying hardware supporting all of it. Remember, as you’re working with more “converged” systems and better multi-tenant platforms, power and cooling demands will very much need to evolve and remain agile.
- New applications and workloads. Data center architects must understand what they’re actually hosting. They don’t have to be experts in application delivery or hosted workloads, but they need to understand how hypervisors, applications, and virtual resources all interact with the underlying data center model. Why? This will help them make better decisions around future data center technologies that revolve around physical design, cooling, power, and even rack architecture. Furthermore, by understanding the tie between cloud, your applications, and the data center, data center architects can evolve into cloud architects and beyond. Having this additional skillset increases their value as an asset and can certainly help from a career perspective.
- Uptime, disaster recovery, and business continuity. This one is huge. The new level of demand surrounding data center resources and the level of reliance on data center technologies is forcing architects to ensure optimal uptime. Cloud computing has made a big impact on the resiliency of the modern data center by helping extend complex resources over vast distances. The data center architect must understand what happens during a disaster event. New kinds of DCIM tools create visibility spanning multiple data center points and allow you to see how resources are being utilized. New methodologies around global server load balancing allow users to be dynamically redirected to the data center with available resources. Bottom line: There is a lot more automation, orchestration, and intelligence built into the modern data center to help support the cloud. Today, data center architects must be aware of those kinds of tools and how they help extend their data center infrastructure.
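To make the global-server-load-balancing point concrete, here is a minimal sketch of health-check-driven site selection. The site names, capacities, and the health-probe stub are assumptions for illustration, not any specific GSLB product.

```python
# Minimal sketch of health-check-driven global server load balancing:
# send users to the healthy data center with the most spare capacity.
# Site names, capacities, and the health-check stub are ASSUMED for illustration.

SITES = {
    "us-east": {"capacity": 1000, "active_sessions": 720},
    "us-west": {"capacity": 1000, "active_sessions": 310},
    "eu-west": {"capacity":  800, "active_sessions": 540},
}

def is_healthy(site_name):
    """Stand-in for a real health probe (HTTP check, DCIM feed, etc.)."""
    return site_name != "eu-west"   # pretend one site is failing its checks

def pick_site(sites):
    """Route to the healthy site with the most headroom; raise if none qualify."""
    headroom = {
        name: s["capacity"] - s["active_sessions"]
        for name, s in sites.items()
        if is_healthy(name) and s["active_sessions"] < s["capacity"]
    }
    if not headroom:
        raise RuntimeError("no healthy data center available")
    return max(headroom, key=headroom.get)

print(pick_site(SITES))   # -> "us-west", the healthy site with the most headroom
```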
I recall speaking at the 2012 Uptime Symposium in Santa Clara where I focused on how new technologies like cloud and virtualization are drawing new kinds of data center road maps. It was a room full of data center engineers and architects eager to hear how cloud computing was going to impact their data center. Over the course of just three years, we’ve seen global data center IP traffic double from 2.6 zettabytes/year to more than 5.2 ZB/year today. And trends indicate continued growth.

The message then was to be aware of what’s happening in your market, the industry, and with end users. Today, the message is much the same. Data center architects must know that they are now foundational pieces of cloud computing. Their roles within the modern enterprise not only create data center efficiencies but also drive organizational competitive advantage.
Data Center Power Purchasing: Don’t Be a Price Taker

Most data center operators make power buying decisions 30 to 90 days before their current power contract expires, and that’s exactly the wrong thing to do, according to Cara Canovas, sales director at Noble Americas Energy Solutions. That’s the best way to become what she called a “price taker,” meaning you’ll be forced to agree to whatever the energy provider says the market price is at the moment, since you won’t have time to devise an intelligent power purchase strategy.
Canovas explained the fundamentals of data center power buying at this week’s Data Center World conference in Las Vegas. Data center operators don’t have to be price takers, but avoiding that fate takes a fairly complex series of decisions and some long-term planning.
But it’s worth taking the time. Data center power costs typically comprise 28 percent to 30 percent of the total cost of running the facility, she said. A sound power-purchase strategy can give you a competitive edge and protect you from the risk of swings in commodity prices.
One caveat: you can only do this in a total of 17 states in the U.S. at the moment, according to Canovas. Those are the states with deregulated power markets. Regulations dictate energy prices in the rest of the country, and those are the prices you’re stuck with if your facilities are in one of those states.
It’s All About Risk Management
Buying power is a lot like buying insurance, Canovas said. You don’t want to overpay, but you also don’t want to end up under-insured, which may at some point lead to massive costs. It is possible for your power prices to double year over year if you haven’t built any hedging mechanisms into your strategy.
Like with insurance, power-buying decisions rely a lot on understanding and managing risk. “It’s really knowing your company’s risk profile, and how much risk they want to take on,” she said.
According to Canovas, a successful power-purchasing plan has to answer these four questions about risk:
- Are we protected from a potential run-up in the market?
- Are we positioned to benefit from price dips in the market?
- Are we able to adapt to changes in consumption?
- What term should we buy and why?
The Basics
No purchasing plan can be made without understanding fundamentals of what makes up the price of power. The basic equation is heat rate x price of fuel + other charges, Canovas said.
Heat rate is a measure of efficiency of a power plant, and the lower the rate the better the efficiency. Wood’s heat rate, for example, is 15; coal’s is 10. Natural-gas and nuclear plants are most efficient: eight and six, respectively. Nuclear is “pretty darn efficient,” she said, but it has gone out of fashion in the U.S.
The price of fuel that goes into the consumer’s price calculation is based on the fuel prices on the commodity futures exchange and varies based on trading location.
Other costs include things like line losses, ancillary charges (costs to move power around), and independent system operator fees. About 85 percent of the power price is power itself, and the rest consists of the other costs, Canovas said.
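A back-of-the-envelope version of that equation, using the gas-plant heat rate cited in the talk. The units (MMBtu per MWh), the fuel price, and the size of the adder are assumptions for illustration.

```python
# Back-of-the-envelope power price: heat rate x price of fuel + other charges.
# The heat rate of 8 for a natural-gas plant is the figure cited in the talk;
# the units (MMBtu/MWh), the $3.00/MMBtu fuel price, and the 15% adder for line
# losses, ancillary charges, and ISO fees are ASSUMPTIONS for illustration.

heat_rate = 8            # MMBtu per MWh (assumed units), gas-plant figure from the talk
fuel_price = 3.00        # $/MMBtu, illustrative futures price
adder_share = 0.15       # article: roughly 85% of the price is the power itself

energy_cost = heat_rate * fuel_price              # $/MWh for the power itself -> $24.00
all_in_price = energy_cost / (1 - adder_share)    # gross up for the other charges
print(f"energy: ${energy_cost:.2f}/MWh, all-in: about ${all_in_price:.2f}/MWh")
```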
Timing is Key
With all of this in mind, the price of power ultimately depends on location, term of use, and time of use.
Time of purchase is extremely important. On one chart Canovas showed, the difference between buying 10 megawatts in one location in January 2014 and buying it in the summer months was close to $1 million. If you wait until the last minute to renew your power purchase contract, you won’t have the luxury of deferring the purchase by a few months.
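For a sense of scale, a quick calculation of what that swing means per megawatt-hour, assuming a one-year term (the article does not state the contract length):

```python
# Rough scale of the timing swing cited in the talk: buying 10 MW in January 2014
# versus the summer months differed by about $1 million. A one-year term is
# ASSUMED here; the article does not give the contract length.

load_mw = 10
hours_per_year = 8760
swing_dollars = 1_000_000

annual_mwh = load_mw * hours_per_year      # 87,600 MWh over the year
per_mwh = swing_dollars / annual_mwh       # about $11.40 per MWh
print(f"{annual_mwh:,} MWh/year -> a swing of roughly ${per_mwh:.2f}/MWh")
```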
Power providers usually have a curve of historical power prices and future projections. The rates they offer to customers are usually based on that curve.
Each provider’s curve is proprietary information, so a customer that doesn’t want to be a price taker needs to be aware of the variables that make up that curve.
The bottom line is that sound data center power purchasing is a complex decision-making process that can have a huge impact on a company’s operational expenses. It is much better to gain some control over your power costs by taking the time to go through that process than to roll the dice and be a price taker.
Build? Buy? Another Perspective on Data Center Ownership

This post originally ran on our sister site MSPmentor.
By Jessica Davis, MSPmentor
Build? Buy? Host? It’s not a new debate for managed service providers (MSPs) and IT service providers. MSPmentor research has found most providers have opted out of running their own data centers, with the exception of very large service providers. What’s more, many MSPs say if they were starting over today they’d start as a born-in-the-cloud company.
For another perspective on this question, MSPmentor recently caught up with executives from Venyu, a company known for its data centers, but also a provider of cloud computing, managed hosting, and other services. And guess what? They pretty much agree with what we’ve found. Here’s what they told us.
The Trouble With Hardware Ownership
“A lot of organizations are realizing that it’s a detriment to own the hardware,” Brian Vandegrift, EVP of sales and innovation, told MSPmentor. “It requires them to be making decisions about maintenance cycles, doing bake-offs, making buying decisions. All of that pulls IT, and MSPs, away from the core mission of their business.”
Maybe that’s why Venyu saw an uptick in business from MSPs — 30 percent year-over-year growth — last year, Vandegrift told me.
“We are their data center. They have their own cloud monitoring in our data center facilities. We also have our own cloud services,” he said. Venyu’s biggest competitor is customers trying to own the infrastructure themselves.
TCO
In that case “it always comes back to a total cost of ownership discussion,” he told me. “What is the best use of their talent that they have on staff?”
This shift to outsourced infrastructure raised another question for me. What does it mean for vendor and reseller sales teams that have typically focused on enterprise customers?
“You are finally seeing a lot of movement from the big hardware providers,” William Sellers, senior innovation engineer at Venyu, told me. “They are all working more closely with [data center providers]…They are letting their sales force embrace the cloud instead of fighting it.
Vendor Response
“A lot of people just don’t want to own the hardware anymore. I also see them developing products for smaller and smaller customers because now they are seeing people who don’t want to buy the entry level hardware will just do the cloud.”
That shift is driving the push to hyperconverged infrastructure, Sellers told me, as big hardware vendors are looking to consolidate compute, networking and storage in a single box and sell it at a lower price to smaller buyers.
Advice for MSPs Shopping for Cloud Providers
For MSPs putting their services in the cloud, Venyu has the following advice: look at the service level agreements and terms of service.
“Not all clouds are created equal,” Sellers said.
“So few of our customers dig down to the terms of service and the SLAs that are being provided,” he said. “You need a robust facility with redundancy in place.”
This first ran at: http://mspmentor.net/cloud-computing/041515/build-buy-another-perspective-data-center-ownership
Huawei Plans Public Cloud Launch in China, Investment in 5G Research 
This article originally appeared at The WHIR
Huawei, a global information and communications technology vendor, will spend $950 million to build its presence and research 5G technology in an effort to improve connectivity. Huawei is also launching its public cloud in China in July, executive Eric Xu said at the company’s annual global analyst summit.
The public cloud computing market in China has maintained growth rates of over 40 percent, according to a report by tech research firm IDC, making it a logical choice for expansion, at least within the Chinese market. “Huawei has run into difficulties in its plans for global expansion in the past because of concerns in some countries, particularly the US, over whether the company has links to the Chinese government and could be a security risk,” according to the South China Morning Post.
Over half of Huawei’s investment will be spent on research into 5G technology. Chief marketing officer Yan Chaobin said at the summit that they plan to introduce commercial 5G networks by 2020.
The company released its 2015 Global Connectivity Index on Tuesday as well. This report benchmarked 50 economies in terms of connectivity, ICT usage, and digital transformation. With China ranking 23rd, it makes sense that Huawei would want to focus attention in that market and especially on improving the existing network. The Asia Cloud Computing Association released a report in March stating that future growth in the APAC region is highly dependent on better, more stable infrastructure.
Alipay recently reported tremendous growth in mobile ecommerce in China, and a Gartner report expects the growth in overall IT spend, which rose 14 percent in 2012, to continue. Although this is an area ripe for growth in the internet services industry, service providers will need to be wary of legislation that would require companies to share source code with the Chinese government.
Although the proposed legislation is currently on hold, other revelations that may give reason for pause continue to arise. For example, it recently came to light that China has the capability to target IP addresses with malware and continues its censorship initiative with DDoS attacks on GitHub.
This first ran at http://www.thewhir.com/web-hosting-news/huawei-plans-public-cloud-launch-china-investment-5g-research
AFCOM Names Data Center Manager of the Year

PARADISE, Nev. – AFCOM named Michael Cunningham, of the University of Texas at Austin, Data Center Manager of the Year. The organization of data center professionals announced the award Tuesday at its 35th annual Data Center World Spring conference in Las Vegas.
Cunningham is credited with shoring up the data center facility and, more importantly, the data center culture at the university. The other two finalists were Brian Smith, director of critical facilities at Cerner Corp., and Bryon Miller, senior vice president of operations at Fortrust.
Cunningham first joined UT five years ago and was tasked with building a $32 million Tier III data center and developing a professional data center culture at the institution from scratch. The data center serves more than 50,000 students and 20,000 faculty and staff.
Before he joined, the university’s IT infrastructure was a piecemeal collection of legacy hardware and software across three data centers. The infrastructure was a reflection of the culture and vice versa.
As director of university data centers, Cunningham made building a proper culture one of his first steps. A good facility means little without good practices to accompany it. Before the changes, the team was “just trying to keep the data center running,” Cunningham said.
Procedures were kept in a three-ring binder augmented with sticky notes and hand-written updates. Organizational knowledge passed through word of mouth, and there was no service catalog. There were no standard practices and very poor documentation — something Cunningham had to turn around.
They developed some new positions and made some critical hires. The 23-person data center staff now includes a critical systems team and project manager, along with console operations, customer liaison, and Tier I and Tier II support, among other functions.
The philosophy behind his approach was that competence under pressure is built by repetition when the stakes are low. This way, staff members are confident and know how to react if an issue does arise.
“Most failures in the data center are caused by human error,” Cunningham said. Therefore, he advises developing robust processes and then training the team to follow them. “We go back and pull small events, so staff can practice resolving them informally in a non-threatening environment.”
Prior to his current role, this year’s award winner spent 10 years at Dell and close to 20 in multiple management roles at IBM. At Dell, he expanded the data center portfolio worldwide.
“Today’s data center managers have responsibility beyond the data center, too,” said Cunningham. “They also must manage the steadily increasing automation from customer-facing services that rely on the data center to operate efficiently and with high availability.”
Karen Riccio and Gail Dutton contributed to this report.