Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, June 21st, 2016

    12:00p
    How to Get Paid for Your Data Center Efficiency Project

    The incredible amount of energy needed to power data centers is well documented. Globally, data center energy use accounts for three percent of all electricity consumed, a figure that will continue to grow in the coming years. While data centers fuel one of the backbones of our economy, this power usage has resulted in staggering electricity bills and large amounts of pollution associated with generating that energy. To help combat this, utility companies in recent years have been offering incentive programs to data center owners and operators who are willing to make their facilities more energy efficient.

    While these incentives are incredibly beneficial to data centers, I’ve found many operators hesitant to take advantage of the opportunity for a variety of reasons, including not being sure where to start, fear of unknown project costs, and confusion on the different types of incentives that are available. These concerns are all very understandable, and I can share some knowledge to help clear up the confusion that surrounds utility incentive programs.

    The incentives offered to data centers by utility companies generally fall into two main categories: Prescriptive Incentives and Customized Incentives.

    Prescriptive Incentives

    Programs in the prescriptive category provide incentives or rebates that are paid as a fixed dollar amount for the replacement of an older technology with a new and more efficient version. For example, on a general facility level, lighting retrofits are often incentivized on a prescriptive basis. If you replace a metal halide fixture with a much more efficient T5 or LED fixture, utility companies will often pay a specific dollar amount for each fixture replaced. For data centers, the most common prescriptive incentive is for Variable Speed Drive (VSD) fan motors on CRAC or CRAH units. More efficient technology outside of IT equipment can also often qualify for prescriptive incentives, such as motors, pumps, and HVAC machinery. Upgrading some of the infrastructure to virtual servers can also qualify for prescriptive incentives.
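
    To see how a fixed-value rebate adds up, here is a minimal arithmetic sketch in Python; the measure names, unit counts, and per-unit dollar amounts are invented for illustration and are not any particular utility's rates.

        # Hypothetical prescriptive rebate tally: a fixed dollar amount per unit replaced.
        # Unit counts and per-unit amounts are illustrative only; actual values vary by utility.
        measures = {
            "LED fixture (replacing metal halide)": {"count": 120, "per_unit": 50.0},
            "VSD retrofit on CRAC/CRAH fan motor":  {"count": 24,  "per_unit": 400.0},
        }

        total = sum(m["count"] * m["per_unit"] for m in measures.values())
        print(f"Estimated prescriptive incentive: ${total:,.2f}")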

    The paperwork required by utility companies to claim prescriptive incentives can be confusing. Different utilities typically have their own unique paperwork that must be completed correctly and in the right order to receive an incentive. In addition, each utility that offers prescriptive incentives has its own set of rules for the types of equipment that will qualify along with the amount paid for each item that is being upgraded. Additional barriers can include: pre-qualification applications, utility personnel site visits for verification before and after the more efficient equipment and/or upgrades have been installed, and verification that purchase orders were not placed prior to a data center’s engagement with the utility company.

    Customized Incentives

    Unlike fixed-value prescriptive incentives, incentives from customized programs are typically performance-based. Payment is tied to specific metrics, like the amount of kWh or kW saved through an efficiency project. Examples of customized incentive programs include: adding economization (or free cooling) to a chilled water system along with monitoring and controls, adding modern controls to CRACs and air handlers, and the utilization of a variety of airflow management solutions.

    Broadly speaking, qualification for customized incentive programs is much more involved than for prescriptive incentives. Customized incentives usually require that an engineering justification for the new proposed measure(s) is created and then reviewed and approved by the utility company (and sometimes a third-party engineering firm) before the project begins. This justification (often in the form of a study or report) almost always includes metering power for existing conditions, calculating the potential savings of a proposed measure or measures, and then conducting power metering after the measures have been installed. The pre-project metering and savings calculations are used to vet the potential savings of the project and an anticipated incentive is calculated and set aside for the customer for a given period of time. Ultimately, the incentive is usually paid based on the measured savings after the project is completed, which can be a barrier to data center owners and operators. Depending on the project, customized incentives often have the advantage of paying much higher amounts than prescriptive incentives, which is why they undergo more scrutiny. For this reason, approval of customized incentives can also take much longer than prescriptive incentives.
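
    To make the savings math concrete, here is a minimal Python sketch of how pre- and post-project metering might translate into a first-year kWh figure and an estimated incentive; the metered loads and the per-kWh payment rate are assumptions, not the terms of any real program.

        # Hedged sketch: turn pre- and post-project metering into first-year kWh savings
        # and an estimated performance-based incentive. All inputs are assumptions.
        HOURS_PER_YEAR = 8760

        baseline_kw = 220.0       # average load from pre-project power metering (assumed)
        post_project_kw = 185.0   # average load from post-project power metering (assumed)
        incentive_per_kwh = 0.08  # assumed utility payment per first-year kWh saved, in dollars

        kwh_saved = (baseline_kw - post_project_kw) * HOURS_PER_YEAR
        estimated_incentive = kwh_saved * incentive_per_kwh

        print(f"First-year savings: {kwh_saved:,.0f} kWh")
        print(f"Estimated customized incentive: ${estimated_incentive:,.2f}")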

    Customized incentives also have to meet certain payback criteria. Usually, if a project has a payback period of less than a year, a customized incentive won’t be available, because utility companies consider the fast payback so valuable that an incentive shouldn’t be needed to motivate the data center to move forward with the project. Conversely, utilities don’t usually incentivize projects with long payback periods (thresholds vary by company, but a payback of 10 years or longer will almost never be approved) because they view such projects as too expensive relative to the savings. Each utility company follows its own rules and regulations, and both the acceptable payback window and the method for calculating payback vary from company to company.
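
    Because eligibility often hinges on that simple-payback window, a quick screen before engaging the utility can be as basic as the Python sketch below; the one-to-ten-year window is only an example, since each utility sets its own thresholds.

        # Hedged sketch: screen a project against an assumed simple-payback eligibility window.
        def payback_years(project_cost: float, annual_savings: float) -> float:
            return project_cost / annual_savings

        def likely_eligible(project_cost: float, annual_savings: float,
                            min_years: float = 1.0, max_years: float = 10.0) -> bool:
            # Very fast paybacks are often excluded; very slow paybacks are rarely approved.
            return min_years <= payback_years(project_cost, annual_savings) <= max_years

        # Example: a $150,000 project saving $45,000 per year pays back in roughly 3.3 years.
        print(likely_eligible(150_000, 45_000))  # True under the assumed 1-10 year window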

    Incentive programs are usually run by utility companies on an annual schedule. They typically begin the program year (which may or may not correspond with the calendar year) with a fixed amount of money to use to incentivize efficiency projects. Unfortunately, the entirety of those funds may be committed to projects in the first few months of the program year, with no additional funds available until the next year. If a data center operator is considering an efficiency project, it is important to know whether incentive funds will be available before the project begins. Additionally, just because a utility program has funds set aside for certain types of projects one year, there is no guarantee that the same level of funding will be available the following year. There is a bright side though: because data centers have been slow to embrace industry-specific utility incentive programs, funding is rarely exhausted.

    The message coming from utility companies is clear: due to the immense power used by the growing data center industry both globally and locally, incentives are available to motivate owners and operators to lower their energy usage. Whether choosing to pursue a prescriptive or customized incentive, the addition of energy efficient solutions in data centers is more attractive (and necessary) than ever before.

    About the author: Tim Hirschenhofer is general manager at Future Resource Engineering.

    3:00p
    Data Centers Offer HBO’s Silicon Valley Much to Laugh About

    Besides the sheer comedic brilliance of its writers and actors, what makes the HBO series Silicon Valley great satire is the amount of effort its creators put into making sure everything on the show that has to do with technology is at least plausible. The real Silicon Valley loves Silicon Valley because it’s funny and because it gets both the technology and the business right.

    And if you’re shooting for accuracy in portraying the high-tech industry, there’s no way around what sits at the heart of every modern software product or service: the data center. There are numerous data center or data center-related scenes in Silicon Valley. Pied Piper, the file-compression startup the show revolves around, uses a home-baked garage data center at first but eventually needs to move to the cloud so it can scale better.

    It is the only TV show that manages to find humor in the data center. In fact, you would be hard-pressed to find another piece of popular culture that portrays the technology world’s very real everyday struggle of infrastructure scalability in such an accurate yet hilarious way.

    Here’s a selection of data center and data center-related scenes from HBO’s Silicon Valley, so judge for yourself and let us know what you think in the comments below:

    Gilfoyle: the Infrastructure Man

    Bertram Gilfoyle, or Gilfoyle, played by Martin Starr, is in charge of Pied Piper’s infrastructure. He is the startup’s data center man, handling system architecture, networking, and security.

    “While you were busy minoring in gender studies and singing a cappella at Sarah Lawrence, I was getting root access to NSA servers. I was one click away from starting the second Iranian revolution.”

    Rack Space

    In this scene, Richard, Dinesh, and Gilfoyle get a tour of a data center where a future hardware appliance they really don’t want to be building will be hosted. The appliance, by the way, was inspired by SimpliVity’s OmniCube.

    The data center manager who is showing the trio around walks them from aisle to aisle, pointing to empty rack spaces where the appliance would go. All the racks and all the spaces are of course the same, but he insists on showing them all. Being shown the same thing over and over is an inescapable part of any data center tour, and unless you’re a data center geek, you will be yawning about 20 minutes in.

    “There are sixteen stairwells; which one would you like to see first?”

    See the video on YouTube

    Scaling the Garage Data Center

    Here we get a glimpse of Pied Piper’s data center, which Gilfoyle built out of tool shelves and milk crates in the garage of the startup’s incubator, also known as Erlich Bachman’s house. In this scene, Pied Piper is demonstrating a live video stream to showcase its compression technology; after the Filipino boxer Manny Pacquiao tweets the stream’s link, it goes viral in the Philippines, putting pressure on Gilfoyle to scale the infrastructure’s capacity.

    His efforts, which include punching a hole through a wall for a shorter cable run and jamming the circuit breakers so they don’t trip, eventually lead to a fire. What happens between Pacquiao’s tweet and a pile of IT gear getting engulfed in flames is hilarious.

    “My servers can handle ten times the traffic, if they weren’t busy apologizing for your shit codebase.”

    See the video on YouTube

    The actual part where servers catch fire is here:

    See the video on YouTube

    3:30p
    Creating a Public Cloud Experience In-House

    David Linthicum is the Senior Vice President of Cloud Technology Partners.

    The cloud has revolutionized the way we build IT systems within enterprises. Indeed, enterprise IT’s goal since the inception of cloud computing has been to replicate the power of cloud computing within its own data centers.

    The trouble is that public cloud systems were built net-new: their builders could start from scratch and thus be more innovative, using the most modern technology and approaches available. Enterprises don’t have the same luxury. Decades of enterprise hardware and software purchases sit at different levels of maturity, and that installed base must also support mission-critical systems in production.

    However, things are changing. New technology now provides enterprises with the public cloud experience, which includes:

    • Elastic use of resources, such as storage and compute.
    • Metered resource charge-back, meaning you only pay for the resources you use.
    • Auto- and self-provisioning; you can spin up and spin down resources as you need them.
    • Tight integration with new approaches and technologies, such as DevOps and the Internet of Things (IoT).
    • Business agility, which is perhaps the most valuable aspect of using the cloud, means that you can quickly change applications and resources with almost no impact on operations.

    In this article, I’ll take you through the steps to capture the value of public clouds on-premises, including a path to applying cloud concepts in ways you may not have known about, enabled by new technologies that support the software-defined data center (SDDC).

    The Public Cloud Experience

    The data is overwhelming around the adoption of public clouds. By 2018, IDC forecasts that public cloud spending will more than double to $127.5 billion. This forecast is broken down as follows: $82.7 billion in SaaS spending, $24.6 billion for IaaS and $20.3 billion in PaaS expenditures. What caused the shift to public clouds? There are five primary strategic drivers:

    1. Purchasers believe the current cost of traditional enterprise software and infrastructure (storage and compute) is disproportionate to the value that it creates.
    2. In these budget-conscious times, there is intense pressure to reduce the cost of acquisition and maintenance of software and hardware solutions (the on-going support and maintenance of solutions can often be four times the original capital cost).
    3. Organizations strive to reduce risk, and want a far more tangible relationship between software and hardware benefits and costs.
    4. The drive for reduced risk demands a much greater predictability of the running costs of the organization’s software solutions.
    5. The value of solutions is no longer determined by the functionality available, but by the feelings and experiences of the users in the way that they use and interact with the solution. In fact, most organizations only use a small subset of the functions available in their software products.

    Tactical advantages are easier to define:

    • Cloud uses a pay-as-you-go model to access a variety of IT resources, and this allows enterprises to only consume the needed resources.
    • On-demand access to resources allows them to be used when and where needed, at a fraction of the cost of owning the hardware and software.
    • Resource elasticity meets varying demands, allowing expansion and contraction of resources as needed.
    • Colocation of computation and data enables large-scale data analytics. Systems that were once out of reach are now affordable.

    One of the primary drivers of cloud computing is the value of business agility. Business agility is the ability to make quick changes in a business to meet changing business needs. Examples would include the ability to add a new product line, expand into new markets, or provide customer visibility into product shipments.

    Public cloud computing provides business agility. The ability to provision and scale a system is built into the architecture of most public clouds. If there is a business need for a new system, it’s just a matter of provisioning the resources required from public cloud providers. This process is much quicker and easier than purchasing, configuring, and hosting your own hardware and software assets.
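
    As a hedged illustration of what “just provisioning the resources” can look like in practice, the sketch below launches a single compute instance on AWS with the boto3 library; the region, machine image ID, and instance type are placeholders to be replaced with your own choices.

        # Minimal sketch of provisioning a new system's compute on a public cloud (AWS via boto3).
        # The region, AMI ID, and instance type are placeholders, not recommendations.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder machine image
            InstanceType="t3.medium",          # placeholder instance size
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "purpose", "Value": "new-business-system"}],
            }],
        )
        print("Launched:", response["Instances"][0]["InstanceId"])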

    The value of business agility really depends upon the type of business. Those in the healthcare and finance verticals obtain a great deal of value from agile platforms, such as cloud-based platforms. Verticals with relatively static business processes, such as many manufacturing organizations, may not realize as much value around the use of cloud computing.

    Moving forward, public clouds are becoming much more feature rich, with additional capabilities that meet or exceed what enterprises currently run in-house. This has led to public clouds becoming the platform of choice, although enterprises may be limited in which aspects of the public cloud they can actually use.

    Building an On-premises Private Cloud

    So, what are the required capabilities of on-premises private clouds that will meet your needs? A few features to consider would be:

    • Converged compute, storage and networking with automation and management
    • On-demand scale-out with x86 servers of any size
    • Delivery of standard OpenStack APIs for operational efficiency and simplicity (see the sketch after this list)
    • Technology for smart resource provisioning and allocation
    • Distributed architecture with self-healing for high availability
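
    To illustrate the OpenStack point above: a private cloud that exposes standard APIs supports the same self-service workflow as a public cloud. The sketch below is a minimal example using the openstacksdk Python library, assuming an OpenStack-based private cloud; the cloud name and the image, flavor, and network IDs are placeholders.

        # Hedged sketch: self-service provisioning against a private cloud's standard OpenStack APIs
        # using openstacksdk. The cloud name and the image/flavor/network IDs are placeholders.
        import openstack

        conn = openstack.connect(cloud="private")   # assumes a "private" entry in clouds.yaml

        server = conn.compute.create_server(
            name="app-server-01",
            image_id="IMAGE_UUID_PLACEHOLDER",
            flavor_id="FLAVOR_UUID_PLACEHOLDER",
            networks=[{"uuid": "NETWORK_UUID_PLACEHOLDER"}],
        )
        server = conn.compute.wait_for_server(server)
        print(server.name, server.status)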

    What’s key about the items above is that they are all features that can be found in an SDDC as well as a private cloud. In essence, you can find an analog for public cloud technology within your own data centers, including the ability to provide elasticity, centralization, and operational efficiency.

    Moreover, these systems can provide the ability to auto- and self-provision compute and storage resources, allowing you to provision resources at the application level. The alternative is to try to predict capacities, and attempt to align need with hardware resources. It’s a guessing game that costs big money, since there is no way to tightly align the needs of production with hardware resources. However, using a private cloud within an SDDC, you’re able to provide a platform that’s able to better respond to the application and user needs.

    Finally, there is the ability to support widely distributed architectures that provide self-healing capabilities, such as working around a failing server or storage system, routing around breaks in network services, and maintaining active/active data redundancy in support of business continuity.

    The first step is to understand your own requirements, and that means doing the up-front work. While your own requirements will differ a bit, the following, at the very least, should be understood (a minimal inventory sketch follows the list):

    • Data properties and usage. Know where your data is, what it is, and how it’s leveraged within applications.
    • Storage properties and usage. How are files stored, when, where, and by whom? Which applications live on, or use, which storage services?
    • Security and governance services needed. What security systems are in place, and how do they need to exist in the to-be architecture using a private cloud and SDDC? The same goes for governance services, such as service or API governance.
    • Application portfolio and profiles. Which applications should and need to be moved to the new private cloud platform, and which ones should move first? Also understand their links to data, as well as how users access the applications.
    • Networking and other infrastructure needs. What are, and what will be, the loads on the network, and how will the target private cloud handle them? There are other things to consider as well, including power management, monitoring and management consoles, and other aspects of your existing premises.
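
    One hedged way to capture this up-front work is a simple structured inventory that mirrors the checklist above; the Python sketch below uses invented placeholder values throughout and is meant only as a starting shape, not a prescribed format.

        # Hedged sketch: a minimal structured inventory covering the requirement areas listed above.
        # Every value is an illustrative placeholder to be replaced by your own findings.
        requirements = {
            "data": {
                "locations": ["primary array", "backup site"],
                "classification": "internal + customer PII",
                "consuming_apps": ["billing", "analytics"],
            },
            "storage": {
                "protocols": ["NFS", "S3-compatible object"],
                "growth_tb_per_year": 40,
            },
            "security_governance": {
                "identity": "LDAP/Active Directory",
                "api_governance": "internal API gateway",
            },
            "applications": {
                "move_first": ["internal wiki", "build farm"],
                "move_later": ["ERP"],
            },
            "network_infrastructure": {
                "peak_load_gbps": 20,
                "monitoring": "existing NMS",
                "power_management": "PDU-level metering",
            },
        }

        for area, details in requirements.items():
            print(f"{area}: {details}")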

    Architectures and Solutions to Consider

    Once we have the requirements down, we can consider a number of meta-architectures: private, public, and hybrid/multi-cloud. Public has already been explored above. Private clouds provide you with more control, and sometimes better efficiency as well, if the right technology is selected and leveraged in the right ways. In other words, you can replicate the public cloud user experience using private cloud solutions.

    However, sometimes it makes sense to pair private clouds with public clouds, thus creating a hybrid or multi-cloud solution. These are more complex than purely private or purely public clouds, but they allow you to place different workloads on different clouds, depending upon what those workloads need to do. For instance, you might place a big data system on a public cloud for cost efficiency, given its storage needs, and have it work in conjunction with systems that live on private clouds.

    Of course, there are many public cloud success stories, including Netflix, which found that its video streaming service was more cost effective to run on a public cloud. Indeed, Netflix is doing so for the same reason that Groupon runs in its own data center: each company ran the numbers, and, for Netflix, the public cloud was the most effective and efficient solution.

    What are the Must Haves When Looking for a Private Cloud?

    So, what are the must haves when looking for private clouds solutions? There are a few key concepts to consider.

    First, ease of use. The private cloud technology must serve those in operations, as well as application developers, and even application users. There are many dimensions to consider, including the ways that each person views the private cloud through their own set of interfaces.

    Second, cost efficiencies. How well does the private cloud do at minimizing costs? It’s important that we run the numbers and understand the cost expectations, in terms of what we’ll spend for the private cloud by resource, application, and user, as well as what efficiencies the private cloud will bring.

    When doing these calculations, make sure to figure in the value of agility, or the value of providing the ability to quickly change and expand business processes. While this value is difficult to determine, you should look at the strategic value of agility to the business. For example, what is the value of bringing a product to market in days, rather than months or sometimes years? It typically means millions to the bottom line.
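
    One hedged way to put a number on agility is to compare the margin earned between a fast launch and a slow one; the figures in the Python sketch below are invented purely to show the arithmetic.

        # Hedged sketch: rough value of agility, counted as margin captured by launching earlier.
        # All figures are invented for illustration.
        monthly_gross_margin = 400_000.0  # expected margin once the product is in market
        fast_launch_months = 0.5          # time to market with cloud-style self-provisioning
        slow_launch_months = 9.0          # time to market with a traditional procurement cycle

        value_of_agility = (slow_launch_months - fast_launch_months) * monthly_gross_margin
        print(f"Margin captured by launching earlier: ${value_of_agility:,.0f}")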

    Call to Action

    So, what does all of this mean for your enterprise? The core message: the value of the public cloud does not necessarily need to be delivered by a public cloud. Using SDDC and converged systems, as well as best-of-breed private cloud platforms, enterprises can build a cost-effective hybrid cloud alternative to public clouds.

    As cloud computing continues to grow in popularity and public clouds become more powerful, the use of public cloud resources will remain contraindicated for many enterprises. For companies in that category, it’s good to know that they have powerful and effective options that open the door to the public cloud experience.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:12p
    Intel Fights Record $1.2B Antitrust Fine at Top EU Court

    (Bloomberg) — Intel attacked the European Commission for being unfair in a probe that led to a record 1.06 billion-euro ($1.2 billion) fine.

    The key issue in the investigation was loyalty rebates to lower retail prices, Daniel Beard, a lawyer for Intel, told the European Union’s Court of Justice in Luxembourg on Tuesday. But the European Commission failed to analyze “all relevant circumstances” to see if the rebates shut out rivals, he said.

    The world’s biggest chipmaker is making a final attempt to overturn the penalty doled out in 2009 for unfairly squeezing out Advanced Micro Devices. No date for a ruling has been set.

    See also: AMD, the Best Bet in Semiconductors, Looks to Defy History

    Two years ago, the EU General Court rejected Intel’s first appeal. That ruling was a timely boost to the Brussels-based European Commission, which is embroiled in lengthy probes of search engine giant Google and chip designer Qualcomm. Regulators say Google gave financial incentives to telecommunications operators and phone makers that install its search app. They also allege Qualcomm paid a smartphone and tablet manufacturer to mostly use its chips.

    Large Market Share

    The Intel case concerns whether a company with a very large market share “can pursue a commercial strategy, the focus of which is the marginalization or even the elimination of its only competitor,” the commission’s lawyer Nicholas Khan told the court Tuesday.

    The evidence shows that the rebates prevented computer makers from seeking out lower prices “that might have been available,” he said.

    The EU’s antitrust regulator in its decision said Intel had obstructed competition by giving rebates to computer makers from 2002 until 2005 on the condition that they buy at least 95 percent of chips for personal computers from Intel. Intel then imposed “restrictive conditions” for the remaining 5 percent, supplied by AMD, which struggled to overcome its rival’s hold on the market for PC processors, the EU said.

    The computer makers coaxed not to use AMD’s chips included Acer, Dell, Hewlett-Packard, Lenovo, and NEC, the commission said in 2009. The EU also said Intel made payments to electronics retailer Media Markt on the condition that it only sell Intel-based PCs. The commission also ordered Intel to stop using illegal rebates to thwart competitors, an instruction that Intel complained was unclear.

    10:17p
    Global Telia Outage Disrupts Popular Internet Services

    Monday’s glitchy internet performance, which reportedly affected a whole range of popular sites and services – from Amazon’s infrastructure cloud to Reddit and Facebook’s WhatsApp – has been blamed on problems with Telia Carrier, the backbone network operator arm of the Swedish telco TeliaSonera.

    It’s unclear what caused Telia’s global backbone to lose data packets traveling between five continents (North America, South America, Africa, Europe, and Asia). Some news reports have tied the outage to an error made by a Telia engineer without indicating the source of the information.

    Multiple Telia customers said on Twitter the outage was caused by human error.

    Packet loss on Telia’s backbone has been documented in detail by one of its major customers, CloudFlare, which operates a global content delivery network. This was the second major Telia outage CloudFlare had experienced in four days, and the CDN provider’s CEO took to social networks to vent his frustration over what he said was a 60-day period of subpar reliability.

    Here’s CloudFlare’s visualization of Monday’s high-packet-loss period on Telia’s global network:

    [Image: CloudFlare chart of packet loss across Telia’s global network, June 20, 2016]
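
    For context, packet loss of the kind charted above can be observed with nothing more exotic than repeated ICMP probes; the Python sketch below assumes a Linux-style ping binary and uses a placeholder target host.

        # Hedged sketch: estimate packet loss toward a host by counting failed ICMP echoes.
        # Assumes a Linux-style `ping` binary; the target host is a placeholder.
        import subprocess

        def packet_loss_pct(host: str, probes: int = 50) -> float:
            lost = 0
            for _ in range(probes):
                # One echo request with a one-second timeout; a non-zero exit code means no reply.
                result = subprocess.run(
                    ["ping", "-c", "1", "-W", "1", host],
                    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
                )
                if result.returncode != 0:
                    lost += 1
            return 100.0 * lost / probes

        print(packet_loss_pct("example.com"))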

    Telia is one of the biggest global backbone operators. Its mesh of interconnected metro networks and PoPs is hosted in many data centers around the world, operated by a variety of data center providers, including Equinix, Digital Realty Trust and its subsidiary Telx, CyrusOne, and Interxion.

    CloudFlare CEO, Matthew Prince, said on Twitter that Telia’s reliability over the last 60 days was unacceptable, and that CloudFlare would de-prioritize the carrier until it fixes its “systemic issues.” In a separate tweet, Prince said his company was spending “millions a year” with Telia.

    The Importance of Transparency

    That network and data center outages are unavoidable is an unfortunate reality for everybody who does business on the internet. All systems go down at some point, and while most customers recognize this, service providers are judged during outages on their speed of recovery and their transparency. Prince and a representative of another Telia customer, Basecamp, maker of a web-based project management tool, both said they were curious to see how transparent the carrier would be about the root cause of the outage.

    Telia has apologized and said it was working with customers directly to resolve problems the outage has caused but has not revealed the cause of the incident publicly.
