Data Center Knowledge | News and analysis for the data center industry

Thursday, May 12th, 2016

    12:00p
    How to Create a Reliable DR Strategy: Best Practices

    Modern IT platforms are designed to handle more users than ever, but what happens when these systems become the primary access point for most, if not all, users? What happens when a critical system experiences a fault or goes down entirely?

    A survey by the Disaster Recovery Preparedness Council found two years ago that only 27 percent of companies received a passing grade for disaster readiness. The more we rely on data centers, the more costly data center outages become. A recent study by the Ponemon Institute and Emerson Network Power found the following (a quick check of the maximum-cost arithmetic follows the list):

    • The cost of downtime has increased 38 percent since 2010.
    • Downtime costs for the most data center-dependent businesses are rising faster than average.
    • Maximum downtime costs increased 32 percent since 2013 and 81 percent since 2010.
    • Maximum downtime costs for 2016 are $2,409,991.
    • UPS system failure continues to be the number one cause of unplanned data center outages, accounting for one-quarter of all such events.
    • Cybercrime represents the fastest growing cause of data center outages, rising from 2 percent of outages in 2010 to 18 percent in 2013 to 22 percent in the latest study.
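
    As a quick back-of-the-envelope check, the earlier maximums can be backed out of the 2016 figure and the stated percentage increases. The short Python sketch below does exactly that; only the 2016 maximum and the percentages come from the study, while the 2010 and 2013 values are derived here.

    ```python
    # Back-of-the-envelope check of the maximum-downtime-cost figures above.
    # Only the 2016 maximum and the percentage increases are quoted from the study;
    # the 2010 and 2013 maximums below are derived from them.

    max_cost_2016 = 2_409_991      # reported maximum downtime cost for 2016 (USD)
    increase_since_2013 = 0.32     # +32 percent since 2013
    increase_since_2010 = 0.81     # +81 percent since 2010

    implied_max_2013 = max_cost_2016 / (1 + increase_since_2013)
    implied_max_2010 = max_cost_2016 / (1 + increase_since_2010)

    print(f"Implied 2013 maximum: ${implied_max_2013:,.0f}")   # roughly $1.83 million
    print(f"Implied 2010 maximum: ${implied_max_2010:,.0f}")   # roughly $1.33 million
    ```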

    With this in mind, what is your DR strategy? Are you ready for an emergency?

    DR Sizing and Planning

    Since every environment is unique, disaster recovery capacity planning may take different shapes and forms depending on the goals of the organization. However, the following five considerations are a good starting point (a minimal capacity-sizing sketch follows the list):

    • User requirements. By establishing user count and future growth, you’ll be able to tell how much storage, RAM, and CPU resources are necessary. In DR planning, this number helps you align resources to the number of users you must support to remain operational.
    • Apps, desktops, workloads, and user resources. By knowing the workload type, we are able to size and plan more effectively. For DR purposes, what will keep your users most productive? Is it a virtual application? Or, is it a full desktop? Maybe it’s a cloud-based DR solution offering Office 365. Know your workload in a DR scenario and how it will be delivered to your users.
    • WAN link considerations. Bandwidth must be considered in designing a DR environment. Furthermore, building in redundancy is critical. Do you have various Ethernet services? Are you ready for a primary link to fail? Make sure to plan around this step as well.
    • Planning around the data center. There are many kinds of data center technologies to work with. For the most part, data centers provided as a service are very flexible, offering management options for almost all levels. Resources must be managed and distributed appropriately – otherwise, an organization may be wasting money on misallocated workloads. In a DR scenario, where is the data center located? Do you own it? Make sure your secondary site is well-planned and ready for an emergency.
    • Content and resource delivery methodologies. How is the workload delivered to the end user by a DR environment? What speeds are optimal? Where will certain types of content be rendered? Do we need to make adjustments to compensate for latency? Does the user have easy access to the app or resource? These points must be worked out for a solid DR plan.
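
    To make the sizing exercise concrete, here is a minimal capacity-sizing sketch in Python. The per-user resource figures, growth rate, and user count are illustrative assumptions rather than numbers from this article; substitute measurements from your own environment.

    ```python
    # Minimal DR capacity-sizing sketch based on the considerations above.
    # All per-user figures and the growth rate are illustrative assumptions.

    def size_dr_site(active_users, annual_growth, years, per_user):
        """Return the resources a secondary site needs to absorb all users."""
        projected_users = int(active_users * (1 + annual_growth) ** years)
        return {resource: projected_users * amount
                for resource, amount in per_user.items()}

    per_user_footprint = {       # hypothetical averages per concurrent user
        "vcpu": 0.5,
        "ram_gb": 2,
        "storage_gb": 20,
        "wan_kbps": 150,         # steady-state bandwidth per remote session
    }

    requirements = size_dr_site(active_users=1200, annual_growth=0.10,
                                years=3, per_user=per_user_footprint)
    for resource, amount in requirements.items():
        print(f"{resource}: {amount:,.0f}")
    ```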

    DR Documentation

    With DR planning comes the important task of documentation. The reality is that this step is often either forgotten or put off until the last minute. Poor documentation can lead to a very bad DR experience. Administrators must not only create current distributed environment documentation, but they must also create what is known as a “living DR workbook.”

    Consider the following when working on a DR plan and its documentation (a sketch of a machine-readable workbook follows the list):

    • This workbook is a truly all-encompassing document, which will evolve as the environment changes.
    • The document will reflect each IT team and their direct responsibilities should an event occur.
    • This document will also spell out different scenarios for different departments.
    • There will be remediation steps for each team and each person responsible will have a task when an outage or pre-designated event occurs.
    • Managers must continuously present this workbook to their staff and ensure that they understand their roles and functions should an event happen.
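
    One way to keep the workbook “living” is to store it in a machine-readable form under version control so it evolves with the environment it describes. The sketch below shows one possible shape; the scenario, teams, contacts, and remediation steps are hypothetical placeholders.

    ```python
    # Sketch of a "living DR workbook" kept in version control.
    # The scenario, teams, contacts, and steps are hypothetical placeholders.

    dr_workbook = {
        "last_reviewed": "2016-05-12",
        "scenarios": {
            "primary_site_power_loss": {
                "teams": {
                    "network": {
                        "owner": "network-oncall@example.com",
                        "remediation": [
                            "Confirm WAN failover to the secondary provider",
                            "Repoint DNS and global load balancing to the DR site",
                        ],
                    },
                    "storage": {
                        "owner": "storage-oncall@example.com",
                        "remediation": [
                            "Promote replicated volumes at the secondary site",
                            "Verify the last successful replication timestamp",
                        ],
                    },
                },
            },
        },
    }

    def tasks_for(scenario, team):
        """Look up one team's remediation steps for a pre-designated event."""
        return dr_workbook["scenarios"][scenario]["teams"][team]["remediation"]

    print(tasks_for("primary_site_power_loss", "network"))
    ```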

    And don’t let these documents get stale. Update them and ensure that DR plans are set in place and kept fresh.

    DR Testing, Maintenance, and Best Practices

    What good is a robust DR plan if no one knows what to do when a disaster actually happens? A DR environment is only useful if the right people can make good decisions based on a planned-out directive.

    All IT team staff and key business personnel must be trained in DR event management. Should an actual disaster occur, all key people involved, business or IT, must know the course of action to be taken. This will include alerting, immediate remediation, and damage control.

    The only way a DR plan stays relevant is if there is continuous training happening at all levels.

    This includes the business layer. Today’s businesses are heavily reliant on their IT infrastructure, which means business stakeholders must have a say and action items in the living DR plan.

    DR environments must be tested and verified to be optimally functional. These tests can happen during off hours or through a mirrored offsite environment. There are numerous testing options, and the best one will be dependent on the needs of the IT team.

    You don’t have to pull the plug on a data center to make sure things are working. Consider the following testing recommendations to validate DR environments (a minimal failover-validation sketch follows the list):

    • Creating shadow users. There are powerful tools that can help create very robust DR strategies. For example, LoginVSI allows organizations to shadow users to mimic impacts on an environment, system, application, and even the business. Using these kinds of tools can help you understand threshold planning, how users interact with an environment, and even test out a secondary site without actually having to failover live users.
    • Leverage virtualization. Load-balancing technologies and failover systems have come a really long way. For example, Citrix’s NetScaler and the F5 ADC each have powerful global load-balancing capabilities. They can also be deployed as virtual appliances. You can test out failover by ensuring that load balancing is working and that users are seamlessly transferred to a secondary environment.
    • Use infrastructure intelligence to test DR. Physical systems can help with DR testing as well. Multi-pathing features allow you to failover entire network components. You can ensure that critical systems continue to stay live by testing critical networking components without having to take your systems down.
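
    As a starting point for automated validation, the sketch below polls health endpoints at a primary and a secondary site and reports whether the failover path is viable. It is generic HTTP health checking with placeholder URLs, not a NetScaler, F5, or Login VSI integration.

    ```python
    # Minimal failover-validation sketch: poll primary and secondary health
    # endpoints and flag when traffic should be served from the DR site.
    # The URLs are placeholders for illustration only.

    import urllib.error
    import urllib.request

    SITES = {
        "primary":   "https://app.primary.example.com/healthz",
        "secondary": "https://app.dr.example.com/healthz",
    }

    def is_healthy(url, timeout=3):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except (urllib.error.URLError, OSError):
            return False

    status = {name: is_healthy(url) for name, url in SITES.items()}
    print(status)

    if not status["primary"] and status["secondary"]:
        print("Primary down, secondary healthy: failover path is viable.")
    elif not any(status.values()):
        print("Both sites unreachable: escalate per the DR workbook.")
    ```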

    Remember, a DR strategy is absolutely critical for your business. If something happens, a good plan will have you back up and running quickly. Just think of how much it costs your business to be down for an hour… or a whole day. These strategies are critical to keeping a business agile and resilient. Make sure to plan, test, document, and maintain your entire DR strategy.

    3:00p
    Seven Things OpenStack DBaaS Can Do that AWS Cannot

    Ken Rugg is the CEO of Tesora.

    There is no denying the success of Amazon in delivering data services as part of its public cloud. Its database-as-a-service (DBaaS) offerings have been among the fastest-growing and most widely used, and they stand out even within Amazon’s remarkable overall growth. At the same time, there are some situations where other options, and in particular those based on OpenStack, can provide clear advantages.

    In this article, I’ll share the current state of DBaaS on OpenStack and provide seven concrete examples of how an organization can benefit from using OpenStack Trove relative to the offerings available from Amazon Web Services (AWS). I’ll assume you understand the value of DBaaS and databases in the cloud so I won’t review those here. Let’s get started.

    1. OpenStack DBaaS runs in the data center, as well as public cloud. While there are clear advantages to the flexibility of the public cloud, many enterprises are reluctant to run all databases there since there is inherently less control and visibility to the security and privacy of the data. I’m not going to get into the argument over whether or not this is true – or how much risk is involved. My opinion doesn’t matter. Enterprises have spoken and they prefer to have the flexibility of deciding where their sensitive data is stored.

    In the case of AWS, the only option is public cloud. With OpenStack, your data can be managed on-premises, in a hosted private cloud, or in a public cloud such as OVH, Rackspace or IBM Blue Box. Some of these service providers even offer customers a built-in, Trove-based enterprise DBaaS.

    2. Access to source code. The old saying with regards to open source software is that it can be “free as in speech” or “free as in beer”. In other words, it can give you the flexibility of not locking you in to a particular vendor’s software or API’s or it can be free to download and use with no financial obligation to license the software. In my experience, while it is always nice to get things that you don’t have to pay for, the former is MUCH more important than the latter. The good news is that with OpenStack Trove you can have both.

    On the “free as in beer” side, OpenStack Trove is offered as free, open source software under a very permissive Apache 2.0 license. You can get the code on github or download our company’s Community Edition, an easily installable distribution of this software.

    Since Trove is an official OpenStack project, it benefits from being part of the fastest-growing open source community in history. According to OpenHUB, OpenStack has had contributions from more than 4,000 individuals. OpenStack DBaaS is also supported by a wide and vibrant community with more than 200 individuals contributing from 40 different companies including HPE, IBM and Red Hat.

    In summary, with OpenStack Trove, the source code is freely available, it is developed by a large community of individuals and companies, and it is governed by a well-known, independent open source body, the OpenStack Foundation. In contrast, the software behind AWS’s data services and the APIs used to manage them are all proprietary and controlled solely by Amazon.

    3. Update on your own timeline. AWS decides when it will make updates to its software and then rolls those out to its users. By contrast, OpenStack and the Trove DBaaS updates are rolled out twice a year and enterprises can decide if and when they will roll out these updates. This provides more flexibility and control over how OpenStack DBaaS is upgraded in comparison with AWS.

    4. No lock-in

    OpenStack DBaaS is virtually the same whether it is run in an on-premises data center or on a hosted private cloud. Building on the benefits of open source, not only do you get a standard set of APIs and source code from an open community, but you can also get this same solution from multiple vendors provided in many locations, operated as a service, or as software that you can operate yourself.

    With this, you can move between hosted and on-premises deployments in the event that it’s necessary to make a change. For example, an application may be developed in a hosted or public cloud environment, and then moved on-premises for deployment if desired.

    With AWS that simply isn’t the case – you use AWS public cloud, period.

    5. Choose your database: greater flexibility with more available choices. Here is the gem of OpenStack’s database as a service: Only OpenStack Trove provides a single framework to operate 13 different database technologies in a consistent way.

    A survey by 451 Research found that enterprises are likely to have multiple database types for different usages. Those will include both SQL and NoSQL datastores, ones optimized for both operational and analytic workloads, as well as both open source databases and commercial database products. As these enterprises move to private, public and hybrid cloud implementations, they bring all of these databases with them.

    While enterprises are now using lots of databases, their management platforms have traditionally been technology specific. This trend has largely continued as database management has moved into the cloud with single database DBaaS offerings such as Azure SQL Database (Microsoft SQL Server), MongoLab (MongoDB) and Cloudant (CouchDB) dominating the landscape.

    While Amazon’s Relational Database Service (RDS) supports a handful of different databases, they are all traditional relational databases with similar architectures. And, AWS provides completely different solutions for data warehouse (Redshift) and NoSQL (DynamoDB).

    Trove takes a fundamentally different approach by creating a pluggable architecture where a wide variety of databases can be managed from a common infrastructure and interface. OpenStack Trove currently supports Cassandra, CouchBase, CouchDB, DataStax Enterprise, DB2, MariaDB, MongoDB, MySQL, Oracle, Percona Server, PostgreSQL, Redis and Vertica with several more currently under development.
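
    To illustrate the single-framework point, the sketch below provisions two different datastores through the same call shape. The endpoint, token, tenant ID, and flavor are placeholders, and the request body follows the general shape of the Trove v1.0 instances API; check the API reference for your OpenStack release before relying on the exact fields.

    ```python
    # Illustrative sketch: provisioning two different datastores through the
    # same Trove interface. Endpoint, token, tenant ID, and flavor are
    # placeholders; the body follows the general shape of the Trove v1.0 API.

    import json
    import urllib.request

    TROVE_ENDPOINT = "http://trove.example.com:8779/v1.0/TENANT_ID"  # placeholder
    AUTH_TOKEN = "KEYSTONE_TOKEN"                                    # placeholder

    def create_instance(name, datastore, version, flavor="2", volume_gb=5):
        body = {
            "instance": {
                "name": name,
                "flavorRef": flavor,
                "volume": {"size": volume_gb},
                "datastore": {"type": datastore, "version": version},
            }
        }
        req = urllib.request.Request(
            f"{TROVE_ENDPOINT}/instances",
            data=json.dumps(body).encode(),
            headers={"X-Auth-Token": AUTH_TOKEN,
                     "Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # The same call shape covers relational and NoSQL datastores alike
    # (these calls only succeed against a real Trove endpoint):
    create_instance("orders-db", datastore="mysql", version="5.6")
    create_instance("session-cache", datastore="redis", version="3.0")
    ```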

    6. Adopting new technologies. Because OpenStack Trove is open source, it can be extended to meet new requirements as they arise. Perhaps you need to support a database that isn’t on the list above or there is new functionality you’d like to add to your favorite database, say a new means of replication. In the case of an open source software like Trove, you can add that support on your own or partner with someone to implement it for you. With AWS, on the other hand, you are limited to whatever AWS delivers and supports and you’ll get it on their timeline.

    7. More control over security. When OpenStack Trove is deployed as a private cloud inside the data center, it is operated by your company’s IT staff who can make sure that it adheres with enterprise best practices and policies for data retention, encryption, backups, etc. While public cloud based DBaaS offerings may provide the tools to do the same, ensuring that those tools are applied often falls to individual developers who provision the databases. When operating Trove inside the envelope of corporate governance and data security, users can be assured that the configurations they are deploying have been reviewed by IT to verify that they follow industry best practices, corporate policies and the applicable data protection laws of the jurisdictions governing the data.

    Conclusion

    These examples highlight the flexibility that OpenStack offers in comparison with AWS. The counterargument is the built-in simplicity of the AWS public cloud, which is readily available and accessible.

    In the end, whether you choose AWS, OpenStack or some other solution, database-as-a-service in its many forms can help enterprises be more agile and cost-effective than traditional on-premises databases.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:22p
    Survey: Most Cloud Users in No Rush to Switch Providers
    By The WHIR

    Despite the flexibility that the cloud offers customers, a new survey by Microsoft and 451 Research suggests that customers are fiercely loyal to their primary service provider.

    According to the survey, The Digital Revolution, Powered by Cloud, which was released Wednesday at the Microsoft Cloud & Hosting Summit in Washington, more than one-third of customers (38 percent) surveyed said they plan to increase spending with their primary cloud and hosting service provider upon contract renewal.

    In an interview with The WHIR, Microsoft’s vice president, Hosting and Cloud Service Provider Business, Aziz Benmalek said that this indicates the critical role service providers play in continuing to “drive organic growth in existing customers and help them in their cloud journey.”

    “Loyalty is high for the primary services providers,” he said. “In fact, 95 percent of the customers surveyed are expecting to stay with their current primary provider in the next year. Almost 70 percent have an annualized agreement with their service provider.”

    This customer loyalty is critical for service providers as more options hit the market, pulling clients in every direction as vendors fight for a piece of their IT spend.

    The study also indicates where customers are spending the majority of their IT budgets. According to the report, 71 percent of customers’ cloud and hosting budgets are now allocated to managed services, application hosting and security services.

    Benmalek said that managed services in particular is “one of the fastest-growing segments” of IT spend in the cloud.

    Survey respondents said that it is important that cloud and hosting providers have experience helping customers transform existing IT environments to cloud-based services, offer services beyond infrastructure (including managed services), can make recommendations for cloud platforms or apps to purchase, and can migrate workloads to different cloud environments. The survey also suggests that customers want service providers who can be a single point of contact for a variety of cloud services, and can broker contracts with other service providers.

    Microsoft has more than 30,000 hosting partners, and while Benmalek wouldn’t say specifically how much revenue these partners drive for the vendor, he did say that it continues to grow “double digits from year to year.”

    “It’s one of the fastest growing businesses for us,” he said.

    Service providers are one component of Microsoft’s hybrid cloud strategy, Benmalek said. Microsoft’s “three-legged stool” consists of on-premises infrastructure, hosted private cloud and service providers, and the Azure public cloud.

    “It’s a very exciting time for us and I think the vibrant ecosystem we see continues to be a key bet for us,” he said.

    The full 78-page report is available for download on Microsoft’s website.

    [Infographic: Microsoft cloud survey]

    This first ran at http://www.thewhir.com/web-hosting-news/when-it-comes-to-cloud-customer-service-still-counts-for-a-lot

    5:28p
    Google Declares War on Boring Data Center Walls

    Usually, if you drive by a data center, there is little indication that the huge gray building you are passing houses one of the engines of the digital economy. Sometimes, if you happen to be a data center geek, you may deduce the facility’s purpose from a fleet of massive cooling units along one of its walls, but even those are often hidden from plain sight.

    Data centers are by and large nondescript, and many, if not most, in the industry like to keep it that way. After all, the fewer people who know where a facility critical to a nation’s economy (a stock exchange data center) or to its security (a mission-critical US Navy data center) is located, the better.

    But Google has decided to flaunt the huge server farms it has built around the world. From images and videos the company has released in the past, the insides of these facilities are works of art. Here’s a 360-degree tour inside one of them:

    Now, the company wants the buildings’ external walls to both reflect their function in society and be a pleasure to look at.

    “Because these buildings typically aren’t much to look at, people usually don’t—and rarely learn about the incredible structures and people inside who make so much of modern life possible,” Joe Kava, VP of Google Data Centers, wrote in a blog post.

    In what it dubbed the “Data Center Mural Project,” Google has hired four artists to paint murals on the walls of four of its data centers: in Mayes County, Oklahoma; St. Ghislain, Belgium; Dublin, Ireland; and Council Bluffs, Iowa.

    The artists were tasked with portraying each building’s function and reflecting the community it’s in.

    The murals in Oklahoma and Belgium have been completed, and the remaining two are in progress.

    Jenny Odell, the artist who worked on the Mayes County project, used Google Maps imagery to create large collages, each reflecting a different type of infrastructure in use today (Photo: Google):

    [Photo: Jenny Odell’s mural at Google’s Mayes County, Oklahoma, data center]

    Oli-B, who painted the mural on a wall of Google’s St. Ghislain data center, created an abstract interpretation of “the cloud.” He used elements specific to the surrounding community, as well as the data center site and the people who work there (Photo: Google):

    [Photo: close-up of Oli-B’s mural at Google’s St. Ghislain data center]

    The four sites are just the start. The company hopes to expand the Data Center Mural Project to more locations.

    More images and video are available on the Data Center Mural Project website.

    5:59p
    What’s New in Latest Linux Releases by Ubuntu, Red Hat, Fedora
    By The VAR Guy

    The past few weeks have been good ones for the open source ecosystem. Three major Linux-based operating systems — Red Hat Enterprise Linux, Fedora and Ubuntu — have debuted in final or beta form. Here’s a look at what’s new in all of them.

    Canonical was the first in the pack to launch a new OS recently with the release on April 21 of Ubuntu 16.04 LTS. Code-named Xenial Xerus, the release not only adds to Linux fans’ vocabularies, but also introduces several key new features. Highlights include:

    • Full support for LXD containers, Canonical’s answer to Docker and CoreOS.
    • The introduction of snap packages, which provide a new way to install and update software alongside venerable apt-get.
    • Enhanced support for the CephFS distributed file system and the ZFS file system.

    The feature updates in Ubuntu 16.04 are notable because Canonical has tended in the past to avoid introducing big changes in LTS versions of Ubuntu, which it supports for much longer than other releases. Canonical’s decision to pile so many new features into the OS seems to signal the company’s desire to set a new tenor for the next couple of years of Ubuntu development, when Ubuntu 16.04, as the most recent LTS release, will set the baseline for future versions of the platform.

    See also: Eight Key Features for IT Managers in Latest Docker Release

    The Red Hat world has seen some significant changes lately, too. On May 10 Red Hat announced the release of RHEL 6.8, as well as the beta version of Fedora 24, the community-supported Linux-based OS that Red Hat uses as a proving ground for RHEL development.

    As a point release, RHEL 6.8 did not introduce any huge new features, but it did update security by replacing openswan with libreswan, an enhanced implementation of IPsec-based VPNs. Red Hat also added new monitoring and backup-and-recovery tools to RHEL.

    The latest beta of Fedora features more changes. Chief among them is enhanced support for OpenShift Origin, the Red Hat/Fedora Kubernetes distribution for developing and managing Docker containers. With official OpenShift packages for Fedora now available, the operating system is one step closer to becoming a production-ready platform for containers.

    Also notable is updated support for the Wayland graphics server in Fedora Workstation, the desktop iteration of the OS. Wayland is a relatively new display server protocol, which promises to make life easier for programmers than X, the display software that has powered most GNU/Linux distributions for many years. Of Wayland, Red Hat says it has the “intention to fully implement it as the default graphics server (replacing X) for future versions of Fedora.”

    The final release of Fedora 24, which also features an updated version of the GNOME desktop environment, is currently set for June 14, 2016.

    This first ran at http://thevarguy.com/open-source-application-software-companies/whats-new-newest-ubuntu-red-hat-and-fedora-linux-releases

    9:20p
    IT Innovators: Gaining Clarity through Cloud
    By WindowsITPro

    The American Society of Health-System Pharmacists (ASHP) is a leader in the pharmacy sector, providing advocacy, career services, continuing education, meetings/conferences, publishing products, and residency training accreditation. For Gregory Smith, ASHP’s CIO and vice president of Information Technology and Operations, the cloud has enabled his staff to identify what their competencies should be, and to develop those core competencies by delegating other things to the cloud.

    “I empowered my team to think differently about what we do and what our core competencies should be,” said Smith, who oversees all technology, software development and integration for new products, e-commerce platforms and operational support, including customer service to members. “Initially, we were doing too much across the spectrum and, as a result, had to try to become experts in too many technologies. We’ve pushed non-core competencies out to vendors where it’s their core competency and shrunk our core to focus and increase expertise.”

    Smith said ASHP uses a variety of cloud-based systems and services, including Software as a Service (SaaS) platforms for email, contract management, e-invoicing, expenses, travel, and file sharing/collaboration. “We’ve deployed four applications to the cloud via SaaS in the last twelve months. Our e-learning platform is another wildly successful cloud deployment.” He added that the company is integrating a single sign-on solution across internal and external offerings to simplify access for staff.

    According to Smith, it’s that staff that has been instrumental in driving cloud adoption at the company. “Today’s technology users are savvy—they know about the cloud in general terms and that it’s easy to procure SaaS,” said Smith. “The relative autonomy that the cloud offers to end users, as well as the efforts ASHP has made to offload processes and services that are complex, but not necessarily core to the company’s business, has enabled IT pros to refocus their efforts on other activities that can help drive the business and add additional operating efficiencies.”

    ASHP is also realizing new efficiencies and focus through a defined annual business plan that includes a governance model for oversight. “We have 12 KPIs that we track on a monthly basis for performance, with each indicator having a green, yellow and red threshold range,” said Smith. “Our expectations are high, and we’re performing quite well across our plan.”
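
    A minimal sketch of that kind of threshold tracking is shown below; the KPIs and threshold values are hypothetical, not ASHP’s actual indicators.

    ```python
    # Minimal sketch of monthly KPI tracking with green/yellow/red thresholds.
    # The KPIs and threshold values are hypothetical examples.

    KPI_THRESHOLDS = {
        # kpi name: (green if >= first value, yellow if >= second, otherwise red)
        "uptime_pct":           (99.9, 99.5),
        "helpdesk_sla_pct":     (95.0, 90.0),
        "projects_on_schedule": (90.0, 80.0),
    }

    def kpi_status(name, value):
        green, yellow = KPI_THRESHOLDS[name]
        if value >= green:
            return "green"
        if value >= yellow:
            return "yellow"
        return "red"

    monthly_results = {"uptime_pct": 99.7,
                       "helpdesk_sla_pct": 96.2,
                       "projects_on_schedule": 78.0}
    for kpi, value in monthly_results.items():
        print(f"{kpi}: {value} -> {kpi_status(kpi, value)}")
    ```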

    Smith said the company is essentially running a hybrid cloud environment, but that he is constantly assessing what is and isn’t running in the cloud, and when it makes sense to change things up, a practice that any IT professional working in the cloud can adopt.

    “We have some applications in a private cloud,” he said. “We’re contemplating bringing them back in-house as the model we’re using is essentially IaaS. We still manage the application and operating system. Comparing costs with today’s server and SAN infrastructure, we can run it more cost effectively inside. Doing so will also allow us to centrally focus on replicating mission-critical applications to a cloud [disaster recovery] vendor.”

    Smith said he is also evaluating whether business intelligence should be hosted in the cloud or on-premises. “The tools that have come into the market recently are amazing, and giving traditional client-installed applications and vendors a run for their money,” he said. “And the pricing model is easier—an operating lease versus a capital expense and depreciation model.”

    As a CIO, author and professor (Georgetown University, graduate program), Smith keeps up with technology trends and how to apply them to make business run better. This is another unique way in which Smith and other IT professionals can get the most out of the cloud.

    “It’s very important to note that the cloud is not always the cheapest option. IaaS environments where we’re still managing the applications don’t always result in the right return on investment and total cost of ownership,” says Smith. “We conduct a rigorous analysis before leaping towards a cloud model type.”

    “Going full bore to the cloud will have a dramatic impact on the IT operating budget. It will go up,” says Smith. “It’s not as big an issue for non-profit and educational organizations as it is for the for-profit world, especially large organizations. Moving from a capital intensive IT model where you own the licenses and hardware to essentially a leased model will result in two things: 1) IT operating dollars will rise as a result of all investments to the cloud being expensed versus depreciated over multiple years, and 2) those increased IT expenses will offset revenue and potentially decrease net income and EPS.”

    “Some organizations treasure operating dollars at budget time and dole them out sparingly. In those settings, cloud may not be the best option, as the CIO will have to fight tooth and nail to get the funding. But for those who can look at the bigger picture and see the benefits, they’ll benefit by having applications and systems deployed faster, which can move the needle on revenue and sales. You just need to spend a lot of time with the CFO and fully understand the impact of your cloud model on the bottom line.”
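
    The budget dynamic Smith describes comes down to where the spend lands on the profit-and-loss statement. The short sketch below works through the arithmetic with hypothetical figures: with owned assets only the annual depreciation slice hits the operating budget, while a cloud subscription is expensed in full each year.

    ```python
    # Worked example of the capex-versus-opex point above. All dollar figures
    # are hypothetical; the shape of the comparison is what matters.

    capex_purchase = 500_000      # owned hardware and licenses, paid up front
    depreciation_years = 5
    annual_depreciation = capex_purchase / depreciation_years  # yearly P&L hit

    annual_cloud_subscription = 180_000   # comparable capability, fully expensed

    print(f"Annual operating impact, owned model: ${annual_depreciation:,.0f}")
    print(f"Annual operating impact, cloud model: ${annual_cloud_subscription:,.0f}")
    print(f"Increase in IT operating expense:     "
          f"${annual_cloud_subscription - annual_depreciation:,.0f}")
    # The cloud cost lands entirely in the operating budget each year, which is
    # the increase Smith describes; total multi-year spend may still be higher
    # or lower depending on the specific numbers.
    ```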

    Deb Donston-Miller has worked as a tech journalist and editor since 1990. If you have a story you would like profiled, contact her at Debra.Donston-Miller@penton.com.

    The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.

    This first ran at http://windowsitpro.com/it-innovators/it-innovators-gaining-clarity-through-cloud

    9:38p
    Facebook, Microsoft Join Renewable Energy Buyer’s Group

    (Bloomberg) — Facebook and Microsoft are joining forces with environmental groups to promote the development of 60 gigawatts of renewable energy by 2025.

    That’s enough to replace all the coal-fired power plants in the US expected to retire in the next four years. The Renewable Energy Buyers Alliance was formed to break down the barriers companies say they face with utilities and regulators in their efforts to reduce carbon emissions, the companies said on a conference call Thursday.

    Large energy consumers such as Facebook and Microsoft that want to run on cleaner energy than utility grids provide have relied for years on buying power directly from renewable energy developers through power purchase agreements. Those opportunities are getting harder to find in some states and are not available to smaller companies, said Brian Janous, director of sustainability at Microsoft. He leads an effort, begun in 2012, to cut the company’s carbon emissions by 9.5 million metric tons.

    Read more: Starbucks Sustainability Chief to Lead Microsoft’s Green Data Center Strategy

    See also: Akamai Pledges to Source Renewable Energy for Data Centers

    Cooperation Needed

    “Much of the activity so far has been in the form of PPAs and that’s an efficient way to secure renewable energy, but it’s challenging for small companies,” Janous said on the call. “We have a long way to go, and the only way we’re going to get there is collaboration. We need utilities to come in as aggregators and provide new opportunities.”

    Facebook, Microsoft and more than 60 companies were joined in the effort by Business for Social Responsibility, the World Resources Institute, the Rocky Mountain Institute and the World Wildlife Fund.

    In depth: Cleaning Up Data Center Power is Dirty Work

    Facebook wants to get half of its electricity from renewable sources by 2018 and eventually meet all of its needs from carbon-free sources, said Bill Weihl, the company’s director of sustainability.

    “Access to clean energy is one aspect we look for when we site data centers,” Weihl said. “We’re working together with utilities and regulators to design new products so we can all buy more clean energy.”

    The group plans to meet at Microsoft headquarters in Redmond, Washington, next week to share experiences and ideas on how to encourage utilities to let businesses buy more energy from wind turbines and solar panels.

    “We know there’s an appetite,” said Letha Tawney, director of utility innovation at the World Resources Institute. “In some markets there are no options. We’re finding that utilities are excited to offer something.”

