Data Center Knowledge | News and analysis for the data center industry
Wednesday, March 9th, 2016
1:00p
Applying the Scientific Method in Data Center Management

Data center management isn’t easy. Computing deployments change daily, airflows are complicated, misplaced incentives cause behaviors that are at odds with growing company profits, and most enterprise data centers lag far behind their cloud-based peers in utilization and total cost of ownership.
One reason why big inefficiencies persist in enterprise data centers is inattention to what I call the three pillars of modern data center management: tracking (measurement and inventory control), developing good procedures, and understanding physical principles and engineering constraints.
Another is that senior management is often unaware of the scope of these problems. For example, a recent study I conducted in collaboration with Anthesis and TSO Logic showed that 30 percent of servers included in our data set were comatose: using electricity but delivering no useful information services. The result is tens of billions of dollars of wasted capital in enterprise data centers around the world, a result that should alarm any C-level executive. But little progress has been made on comatose servers since the problem first surfaced years ago as the target of the Uptime Institute’s server roundup.
Read more: $30B Worth of Idle Servers Sit in Data Centers
One antidote to these problems is to bring the scientific method to data center management. That means creating hypotheses, experimenting to test them, and changing operational strategies accordingly, in an endless cycle of continuous improvement. Doing so isn’t always easy in the data center, because deploying equipment is expensive, and experimentation can be risky.
Is there a way to experiment at low risk and modest cost in data centers? Why yes, there is. As I’ve discussed elsewhere, calibrated models of the data center can be used to test the effects of different software deployments on airflow, temperatures, reliability, electricity use, and data center capacity. In fact, using such models is the only accurate way to assess the effects of potential changes in data center configuration on the things operators care about, because the systems are so complex.
Sign up for Jonathan Koomey’s online course, Modernizing Enterprise Data Centers for Fun and Profit. More details below.
Recently, scientists at the State University of New York at Binghamton created a calibrated model of a 41-rack data center to test how accurately one type of software (6SigmaDC) could predict temperatures in that facility and to create a test bed for future experiments. The scientists can configure the data center easily, without fear of disrupting mission critical operations, because the setup is solely for testing. They can also run different workloads to see how those might affect energy use or reliability in the facility.
Read more: Three Ways to Get a Better Data Center Model
Most enterprise data centers don’t have such flexibility. Those with sufficient scale can cordon off a section of the facility as a test bed, but for most enterprises such direct experimentation is impractical. What almost all of them can do is create a calibrated model of their facility and run the experiments in software.
What the Binghamton work shows is that experimenting in code is cheaper, easier, and less risky than deploying physical hardware, and just about as accurate (as long as the model is properly calibrated). In their initial test setup, they reliably predicted temperatures with just a couple of outliers for each rack, and those results could no doubt be improved with further calibration. They were able to identify the physical reasons for the differences between modeling results and measurements, and once identified, the path to a better and more accurate model is clear.
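To make "calibrated" concrete, here is a minimal sketch (not the Binghamton team's actual tooling) of the kind of check such a test bed enables: compare the temperatures the model predicts at each rack's sensors against the measured values, and flag sensors whose error exceeds a tolerance. The sample readings and the 2°C threshold are illustrative assumptions.

```python
# Compare modeled vs. measured rack-inlet temperatures and flag outliers.
# All numbers below are illustrative, not data from the study.

modeled = {   # rack ID -> predicted inlet temps (deg C) at top/middle/bottom sensors
    "rack-01": [24.1, 23.5, 22.8],
    "rack-02": [26.0, 24.9, 23.7],
}
measured = {  # the corresponding sensor readings
    "rack-01": [24.4, 23.3, 25.1],
    "rack-02": [25.8, 25.0, 23.9],
}

TOLERANCE_C = 2.0  # assumed threshold separating "calibrated" from "outlier"

for rack, preds in modeled.items():
    errors = [abs(p - m) for p, m in zip(preds, measured[rack])]
    outliers = sum(1 for e in errors if e > TOLERANCE_C)
    mae = sum(errors) / len(errors)
    print(f"{rack}: mean abs error {mae:.2f} C, {outliers} sensor(s) out of tolerance")
```

Each sensor flagged out of tolerance points to a physical discrepancy (a mis-modeled tile, a leak, an uncounted load) that can be investigated and folded back into the model, which is exactly the continuous-improvement loop described above.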
We need more testing labs of this kind, applied to all modeling software used in data center management, to assess accuracy and improve best practices. But the high-level lesson is clear: enterprise data centers should use software to improve their operational performance, and the Binghamton work shows the way forward. IT is transforming the rest of the economy; why not use it to transform IT itself?
About the author: Jonathan Koomey is a Research Fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University and is one of the leading international experts on the energy use and economics of data centers.
Sign up here for his upcoming online course, called Modernizing Enterprise Data Centers for Fun and Profit, which is starting May 2.
The course teaches you how to turn your data centers into cost-reducing profit centers. It provides a road map for businesses to improve the business performance of information technology (IT) assets, drawing upon real-world experiences from industry-leading companies like eBay and Google. For firms just beginning this journey, it describes concrete steps to get started down the path of higher efficiency, improved business agility, and increased profits from IT.

4:00p
Workforce Trends: Digital Business and IT Service Management

Robin Purohit is President of Service Support at BMC.
Employees’ expectations about where they work, how they get work done, and when it happens are changing rapidly. Workforces are increasingly global and distributed, and individuals want to be productive at all times, wherever they happen to be.
Meanwhile, the ability to work from anywhere now has employers expecting employees to work productively during non-traditional hours at various locations. This dynamic has changed the concept of the traditional “office” and put tremendous pressure on IT service management (ITSM) to maintain 24/7 operations and support the new distributed digital enterprise.
Likewise, mobility is replacing the traditional desktop experience. A Kensington Productivity Survey found that more than 60 percent of professionals use multiple devices at work at least half of the time, and 90 percent believe integrating devices would enhance productivity. Further, as millennials make up a larger percentage of the labor force, they increasingly expect a consumer-like experience at work akin to the smart, user-friendly technology they use at home.
In fact, 2016 may well be the year when the “workplace” will no longer be a single place at all, as enterprises accelerate the shift to a more consumer-like computing environment, enabling employees to choose the productivity tools and technology they want to use.
Companies that don’t modernize their IT service desks to adequately support their new digital business will face dwindling prospects and could well find themselves in the company of the 75 percent of S&P 500 companies that will be replaced by 2027. Perhaps the most dangerous consequence will be the difficulty of attracting and retaining top talent if systems don’t empower them to be productive and successful.
ITSM/Digital Business Enablers
Here are four key ways companies embracing digital are adapting and transitioning to meet the requirements of their workforces:
Mobile-first. Digital natives are becoming a larger part of our workforce each day. Each new entrant probably can’t remember a world without mobile phones, and the expectation is that the work experience will mimic the consumer experience they’re used to.
To work as efficiently and productively as possible, these mobile employees need flexibility to work from anywhere on multiple devices with a seamless user experience. This includes the ability to access the service desk solution from anywhere using mobile devices. Done the right way, a mobile-first approach can also offer unparalleled convenience and productivity to IT service support teams, along with increased customer satisfaction.
A persona-based approach. IT is becoming a curator of apps, devices, and content based on personas. A persona-based strategy empowers everyone in the organization by giving individuals easy access to appropriate tools and streamlined service delivery based on their roles, such as a “developer” or “sales rep” (see the sketch after this list). This approach streamlines the user experience and promotes user understanding and adoption as a means to increase first-call resolution rates and customer satisfaction.
Automation – moving at the speed of expectations. IT automation has always been important, but trouble tickets continue to be a burden on the service desk. Digital businesses are taking an increasingly strategic approach to automation that responds quickly to changing business requirements.
Automation in the form of user self-service, for example, reduces IT staff workloads while improving employee productivity and satisfaction. Reducing the chance of human error and optimizing every step of a process also radically reduces security and compliance risks.
Empowering IT service management to support the digital business is enabling companies such as Vodafone to provide self-service access to the answers and tools employees need based on their locations, roles and preferences. Rather than submitting a trouble ticket into a long queue or waiting on hold, the information they need is available through a browser or a mobile app, easing resolution and reducing the burden on IT staff. In addition, by solving their own problems quickly and easily, employees can get back to work promptly to serve customers.
Crowdsourcing – asking employees how they want to work. Many companies today are using crowdsourcing to enable employees to help IT map and manage the IT environment. Using crowdsourcing, users add assets to location-aware maps, while IT determines what information needs to be included and controls who can add what information to which maps. Employees can also report outages, providing IT with a real-time flow of asset updates. By building a repository of crowdsourced problems and resolutions, IT empowers employees to find answers to most of their questions with little effort.
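As a toy illustration of the persona-based approach above, a service catalog can be as simple as a mapping from role to the apps and self-service actions that role is entitled to. The persona names and entitlements here are hypothetical, not BMC's actual data model.

```python
# Hypothetical persona catalog: roles mapped to entitled apps and
# self-service actions. Names are illustrative, not a vendor schema.

PERSONA_CATALOG = {
    "developer": {
        "apps": ["IDE", "CI/CD pipeline", "issue tracker"],
        "self_service": ["dev VM provisioning", "repo access request"],
    },
    "sales rep": {
        "apps": ["CRM", "quoting tool"],
        "self_service": ["mobile device enrollment"],
    },
}

def entitlements_for(persona):
    """Look up the apps and self-service actions for a persona."""
    return PERSONA_CATALOG.get(persona, {"apps": [], "self_service": []})

print(entitlements_for("developer")["apps"])  # what a new developer sees on day one
```

The payoff of keeping this mapping explicit is that onboarding, the service portal, and first-call support can all read from one source of truth about what a given role should have.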
Businesses need to think differently about their workforces. The modern digital workforce is about fast, effective, and elegant ways of working anytime, anywhere, with access to the applications and services needed to get work done. It involves new modes and methods of working, not just making offices more mobile or adding digital services to the workplace. By rethinking digital capabilities and adopting these best practices, businesses can raise the bar on how employees engage with customers, drive operational efficiencies, and boost overall productivity.
Becoming a digital enterprise isn’t a plan for the future; it’s a transformation that CIOs need to be making now. Organizations that are not digitally empowered will soon be unable to compete, and that empowerment requires an ITSM team that is itself strategically enabled to support, optimize and grow the digital business.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:31p
Equinix to Use Facebook Data Center Switches for Interconnection Platform

Data center services giant Equinix is planning to deploy Wedge networking switches Facebook designed for its own data centers and open sourced through its open source hardware community, the Open Compute Project.
Deployment of the switches is the first step in what Equinix expects will be an “open source” platform for software and hardware inside its data centers around the world. It will be used to enable interconnection between the data center provider’s customers and partners. Equinix said it will collaborate with Facebook to build it.
Another piece of open source technology that will be part of the platform is the Data Center Operating System by Mesosphere, which has open source Apache Mesos at its core. DCOS aggregates disparate compute resources in a data center, turning them into virtual pools, making it easier for developers to build software without worrying about the types of hardware it will run on.
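For readers unfamiliar with the pooling idea, here is a deliberately tiny sketch (not Mesos or DCOS code) of what "turning disparate machines into one virtual pool" means: tasks ask the pool for resources, and a scheduler places them on whichever machine fits, so the developer never names a specific box. The machine specs and the first-fit policy are assumptions for illustration.

```python
# Toy first-fit scheduler illustrating resource pooling.
# An assumption-laden sketch, not the Mesos/DCOS API.

machines = [
    {"name": "node-a", "cpus": 8,  "mem_gb": 32},
    {"name": "node-b", "cpus": 16, "mem_gb": 64},
]

def place(task_cpus, task_mem_gb):
    """Place a task on any machine with spare capacity; return its name."""
    for m in machines:
        if m["cpus"] >= task_cpus and m["mem_gb"] >= task_mem_gb:
            m["cpus"] -= task_cpus      # reserve the resources
            m["mem_gb"] -= task_mem_gb
            return m["name"]
    return None  # pool exhausted

print(place(4, 16))   # the caller never chooses the node
print(place(12, 32))
```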
More and more software will have to run on massive-scale infrastructure that often spans the globe. The technologies that have come out of OCP were all originally developed to support some of the largest such infrastructures: data centers operated by the likes of Facebook and Microsoft, designed for scale from the ground up.
Earlier this year, Equinix joined a newly formed initiative within OCP, the OCP Telco Project, launched by a group of major telecommunications firms, including AT&T, Verizon, and Deutsche Telekom, among others. Today’s announcement builds on that move.
Equinix doesn’t want to become a technology vendor, but its business model relies to a great extent on interconnection. Joining the OCP Telco Project is a way to future-proof that business model, Equinix CTO Ihab Tarazi told us earlier.
All the major telcos are the data center provider’s customers, and they connect to each other, to cloud providers, and to customers in Equinix data centers using the IP protocol. As they make key decisions about the next-generation technologies they will use in the future, Equinix wants to be part of the conversation, Tarazi explained.

8:26p
Google Contributes 48V Shallow Data Center Rack to OCP

Google has joined the open source data center and hardware design community created and led by Facebook, called the Open Compute Project, the company announced Wednesday.
Through OCP, Facebook opened up many of the specs and designs for its custom data center hardware – compute, storage, and networking – as well as its data center racks, power, and cooling systems. The project has become a hub where companies that build hyperscale data center infrastructure collaborate with vendors and with each other to solve common technology problems.
Microsoft joined OCP in 2014, contributing its cloud server design, and Apple joined last year. While Google has open sourced a lot of innovative infrastructure technology, it has resisted going all-in with OCP until now.
And Google is joining with a contribution of its own. The company said it will contribute a spec for a data center rack with 48V power distribution and a new form factor that will enable OCP racks to fit into Google data centers.
One of the early glimpses of the technology in Google data centers was a 2006 paper the company published describing its 12V power infrastructure design for data center racks. Three years later, however, the company started evaluating 48V power distribution, because it promised better efficiency and performance for more power-hungry high-performance computing systems powered by GPUs and high-power CPUs.
Facebook’s Open Racks are backed up by UPS systems with battery cabinets that provide 48V DC power. The Open Rack spec recommends 12V DC as primary power, however.
Google’s rack supports servers that use 48V power and 48V rack-level UPS systems. “As the industry’s working to solve these same problems and dealing with higher-power workloads, such as GPUs for machine learning, it makes sense to standardize this new design by working with OCP,” John Zipfel, technical program manager at Google, wrote in a blog post.
The system supplies 48V power all the way to the motherboard. There is only one conversion step, from DC to DC, stepping down the voltage for each individual component, such as CPU, memory, or disk, depending on what that component needs.
This reduction of conversion steps has resulted in a 30 percent improvement in energy efficiency, Urs Hölzle, senior VP of technical infrastructure at Google, said from the stage at the Open Compute Summit in San Jose, California, on Wednesday, where he announced the contribution and the addition of Google to the list of OCP members.
“It’s something that we’ve deployed at scale, so we have several years of experience now,” he said.
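The 30 percent figure reflects the removal of conversion stages, but higher distribution voltage helps on its own as well: for a fixed load, current scales as 1/V and conduction loss as I²R, so going from 12V to 48V cuts resistive losses in the busbar by a factor of 16. The rack load and bus resistance below are assumed for illustration; they are not Google's figures.

```python
# Back-of-the-envelope comparison of conduction losses at 12V vs. 48V.
# LOAD_W and BUS_RESISTANCE are assumptions, not published numbers.

def conduction_loss_watts(power_w, volts, resistance_ohms):
    current = power_w / volts               # I = P / V
    return current ** 2 * resistance_ohms  # P_loss = I^2 * R

LOAD_W = 10_000         # assumed rack load in watts
BUS_RESISTANCE = 0.001  # assumed end-to-end busbar resistance in ohms

for volts in (12, 48):
    loss = conduction_loss_watts(LOAD_W, volts, BUS_RESISTANCE)
    print(f"{volts}V: {loss:.0f} W lost to conduction ({loss / LOAD_W:.1%} of load)")
```

With these assumed numbers, the 12V bus dissipates roughly 694 W (about 7 percent of the load) while the 48V bus dissipates about 43 W, which is one reason the industry keeps pushing distribution voltage up as rack power densities grow.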
Google is currently working with Facebook to create a rack standard that suppliers could use, in hopes of pushing it into mass production. If Google’s contribution is accepted by OCP and the standard is developed, Google will deploy racks based on that standard in its data centers, and there are indications that Facebook would do the same, Hölzle said.
“We’re joining OCP for that purpose,” he said, explaining that there was no good reason to have multiple versions of a 48V rack.
Another major difference between Google’s rack and the current Open Rack design is the form factor. Google’s racks are shallower, and its data centers are designed in a way that cannot support the full-depth Open Rack.
“This is something that we need,” Hölzle said. “Our rows aren’t wide enough.” The shallow form factor should work well with most modern motherboard designs, he said.
The racks are backwards compatible to a certain extent. Google still deploys 12V server trays into 48V racks from time to time. This requires an extra DC-to-DC converter to step the voltage down at the tray level and costs extra, so the company tries to avoid it when it can.

11:22p
IT Innovators: Improving Data Center Efficiency with Green IT Strategies

By WindowsITPro
In advance of his talk at the Green Data Center Conference, Jack Pouchet, vice president of market development at Emerson Network Power, sat down for a Q&A with IT Innovators about the environmental impact of cloud technologies. A board member of The Green Grid Association (a global consortium of companies, government agencies, educational institutions, and individuals dedicated to driving effective and accountable resource efficiency across technology ecosystems), Pouchet takes the pulse of the IT industry and identifies opportunities for innovative solutions that address its dynamic needs.
What is one of the biggest issues facing the IT industry?
Cyber security is top of mind with all of our customers, and all of their customers. It certainly is top of mind with the government agencies, and it’s changing how we operate the existing infrastructure we have, the existing buildings we have. It’s changing how we update the existing facilities and how we plan for the cloud; do we go to a hybrid cloud, do we go to a public/private cloud, or do we build our own data center and go to the cloud?
What advice would you give IT professionals considering private or hybrid cloud?
Ask the questions: “What happens when…What happens to us when whoever we went to goes down? What happens to us when whoever we went to gets hacked?” Those are the things you need to be prepared for and there’s certainly a ton of providers out there that can put in place the security, the data integrity. There are HIPAA-approved cloud providers out there and HIPAA has some pretty strong frameworks around it.
What are some pitfalls you’d warn against?
Think through regional events, a regional power outage or widespread power outage, any sort of natural disaster. What’s your recovery plan? What’s your resiliency plan? What if there’s a major network outage of some kind that takes out the fiber carrier? You have to have plans for that sort of thing when you’re going to the cloud.
How has cloud computing shaped the data centers of today and in the future?
What companies are saying is, “We have a fungible asset in that we can expand into the cloud. We can take non-critical workloads and move them so as IT demands wax and wane, we can go take advantage of what’s out there.” That allows them to make more intelligent decisions on the core building they already have. If it’s more than 3-5 years old, there’s probably a lot of opportunity to make it better. The cloud is allowing people to have a little breathing room. Take advantage of that to make your facility better, more robust, more efficient.
What are the cost savings associated with energy-efficient data center deployments?
For the vast majority of our clients, the Energy Logic strategies net out to a two to three year payback. We encourage data center operators and owners to hire trained professionals when evaluating their existing facilities and to help develop sound, reliable energy-saving solutions that will not jeopardize their production environment, will produce savings and, when properly executed, will lead to a more stable, robust and resilient IT environment.
What options should IT professionals consider to reduce environmental impact?
One thing to consider is the concept of server idle energy. One of the things we know in the IT world is that the vast majority of servers sit idle for a large part of the time. We know from Jonathan Koomey’s recent study that up to 30 percent of servers are what we call “comatose,” meaning that there has not been a known workload applied to that device in the last six months. Guess what? While it’s sitting there idle, it’s using 40, maybe 50, percent of the energy the working ones are using. So if we did nothing else but replace servers as we migrated to ones that use 10 percent or less energy when they’re idle, the savings would be huge.
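Taking the interview's figures at face value, a quick back-of-the-envelope calculation shows the scale of the opportunity. The percentages below come from the answer above; the fleet size and per-server power are assumptions for illustration.

```python
# Rough idle-energy savings estimate. FLEET_SIZE and ACTIVE_POWER_W are
# assumed; the fractions come from the figures cited in the interview.

FLEET_SIZE = 1_000        # assumed number of servers in the fleet
ACTIVE_POWER_W = 300      # assumed average active draw per server, watts
COMATOSE_SHARE = 0.30     # up to 30% of servers comatose (Koomey study)
OLD_IDLE_FRACTION = 0.45  # midpoint of the 40-50% idle draw cited
NEW_IDLE_FRACTION = 0.10  # modern servers idling at 10% or less

comatose = FLEET_SIZE * COMATOSE_SHARE
old_idle_kw = comatose * ACTIVE_POWER_W * OLD_IDLE_FRACTION / 1000
new_idle_kw = comatose * ACTIVE_POWER_W * NEW_IDLE_FRACTION / 1000
saved_kwh_per_year = (old_idle_kw - new_idle_kw) * 24 * 365

print(f"Idle draw today: {old_idle_kw:.1f} kW; after refresh: {new_idle_kw:.1f} kW")
print(f"Energy saved: {saved_kwh_per_year:,.0f} kWh/year")
```

Even for this modest assumed fleet, the refresh saves on the order of 275,000 kWh per year, and decommissioning truly comatose machines outright would save more still.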
What are the most significant data center trends in energy that you are seeing?
Clearly what I see is the move to 100 percent renewable energy—either buying it or doing microgrids—and then looking at low-water or waterless cooling technologies. In the data center world, I think the next thing that’s coming is that people will be held accountable for how much water they are using. How wet is the cloud?
What do you hope to accomplish at the Green Data Center Conference?
What the big players are doing in the marketplace, you can do as well. We’re asking you to take a look at what’s getting done and how we can parse out some of the innovation that has been developed in building these “hyperscale” data centers, and apply those principles and practices to what you’re doing. And along the way, speed up your time to execute, your time to delivery, your time to deploy, whatever metrics are important to your business. Those players have done some incredible innovation, some brilliant work, and you can benefit.
Christy Peters is a writer and communications consultant based in the San Francisco Bay Area. She holds a BS in journalism and her work covers a variety of technologies including semiconductors, search engines, consumer electronics, test and measurement, and IT software and services. If you have a story you would like profiled, please contact her at christina_peters@comcast.net.
The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.
This first ran at http://windowsitpro.com/it-innovators/it-innovators-improving-data-center-efficiency-green-it-strategies