Data Center Knowledge | News and analysis for the data center industry

Tuesday, February 10th, 2015

    1:00p
    Why Cisco is Warming to Non-ACI Data Center SDN

    There are three basic ways to do software defined networking in a data center. One way is to use OpenFlow, an SDN standard often criticized for poor scalability. Another, much more popular way, is to use virtual network overlays. And the third is Cisco’s way.

    The networking giant proposed its proprietary Application Centric Infrastructure as an alternative to open-standards-based data center SDN in 2013. It is similar in concept to virtual overlays but works on Cisco gear only.

    Some Cisco switches support OpenFlow, but not the current-generation Nexus 9000 line, although the company has said it is planning to change that sometime in the future. Now, however, there’s a new, more interesting twist to Cisco’s SDN story.

    Last week the company announced that Nexus 9000 switches will soon (before the end of the month) support an open protocol called BGP EVPN. Older-generation lines will also support it in the near future. The aim is essentially to make it easier to implement virtual overlays on the latest Cisco switches. Theoretically, support of the protocol would also open doors for third-party SDN controllers to be used to manage these environments.

    The move indicates a realization on Cisco’s part that ACI is still a tough sell for many customers: interest in Nexus 9000 switches is there, while interest in going all-in with its proprietary SDN is trailing behind. The company’s own sales figures confirm that reality. According to Cisco, out of the 1,000 or so Nexus 9000 customers, only about 200 have bought the Application Policy Infrastructure Controller, the SDN controller for ACI environments, since it started shipping in August.

    “This is an interesting move by Cisco in that [the open protocol-based overlay technology] is a difference from their ACI stack,” Mike Marcellin, senior vice president of strategy and marketing at Cisco rival Juniper Networks, said. “Cisco is acknowledging that ACI is not for everybody, and [that] they need to think a little more broadly.”

    A Control Plane for VXLAN

    BGP EVPN is a control plane technology for a specific type of virtual overlay: VXLAN. The VXLAN framework was created by Storvisor, Cumulus Networks, Arista, Broadcom, Cisco, VMware, Intel, and Red Hat. While it describes creation of the overlay, it does not describe a control plane, which is where BGP EVPN comes in.
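
    To make the overlay half of that split concrete, here is a minimal Python sketch of the VXLAN encapsulation defined in RFC 7348: an 8-byte header carrying a 24-bit virtual network identifier (VNI) is prepended to the tenant’s Ethernet frame, and the result is tunneled inside an outer UDP datagram (destination port 4789). This only illustrates the header format; the function name and error handling are our assumptions, not part of any product described here.

        import struct

        VXLAN_PORT = 4789          # IANA-assigned UDP port for VXLAN traffic
        VXLAN_FLAGS = 0x08000000   # first 32-bit word: "I" flag set, meaning a valid VNI follows

        def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
            """Prepend the 8-byte VXLAN header to an inner Ethernet frame."""
            if not 0 <= vni < 2 ** 24:
                raise ValueError("VNI must fit in 24 bits")
            # Two 32-bit words: flags + 24 reserved bits, then VNI + 8 reserved bits.
            header = struct.pack("!II", VXLAN_FLAGS, vni << 8)
            return header + inner_frame

        # Tenant 100's frames and tenant 200's frames stay separate on the same wire:
        packet_a = vxlan_encapsulate(b"\x00" * 64, vni=100)
        packet_b = vxlan_encapsulate(b"\x00" * 64, vni=200)

    The encapsulated frame is then sent to the destination tunnel endpoint over ordinary IP, which is exactly why the framework still needs a control plane to say where that destination is.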

    EVPN is a protocol generally used to connect multiple data centers over Wide Area Networks. It enables a service provider to create multiple virtual connections between sites for a number of customers over a single physical network while keeping each customer’s traffic private.

    Cisco, Juniper, Huawei, Verizon, Bloomberg, and Alcatel-Lucent have authored a standard for using EVPN as a network virtualization overlay. An NVO is essentially a way to isolate traffic between virtual machines belonging to lots of different tenants in a data center. It is powerful because it enables private connectivity between VMs sitting in different parts of a data center or in entirely different data centers. It also enables entire VMs to move from one host device to another.

    The BGP protocol has played a key role in enabling the Internet. Disparate systems operated by different organizations use it to exchange routing and reachability information. In other words, BGP is the language used by devices on the Internet to tell other devices that they exist and how they can be reached.

    EVPN relies on BGP to distribute control plane traffic, to discover “Provider Edge” devices (devices that connect the provider’s network to the customer’s network), and to ensure that traffic meant to stay within a specific EVPN instance doesn’t venture outside.
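
    As a rough sketch of what that control plane distributes, the Python below models the gist of an EVPN MAC advertisement (real RFC 7432 Type 2 routes carry additional fields such as labels and Ethernet segment identifiers): each BGP update tells every other tunnel endpoint behind which VTEP a given MAC address lives, and the route distinguisher keeps overlapping tenant addresses from colliding. All names and values here are illustrative assumptions, not Cisco’s or Juniper’s implementation.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class EvpnMacRoute:
            route_distinguisher: str  # keeps identical MACs in different tenants distinct
            vni: int                  # which virtual network the address belongs to
            mac: str                  # the VM's MAC address
            next_hop_vtep: str        # IP of the tunnel endpoint hosting that VM

        # Each VTEP builds a forwarding table from routes received over BGP,
        # so reachability is learned in the control plane instead of by flooding.
        forwarding_table: dict[tuple[int, str], str] = {}

        def on_bgp_update(route: EvpnMacRoute) -> None:
            forwarding_table[(route.vni, route.mac)] = route.next_hop_vtep

        on_bgp_update(EvpnMacRoute("64512:100", 100, "aa:bb:cc:dd:ee:ff", "192.0.2.7"))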

    Making VXLAN on Cisco Easier

    Customers have been able to set up VXLAN overlays on Cisco switches before, but without BGP EVPN it was a tedious, complicated task, Gary Kinghorn, product marketing manager for Cisco’s Data Center Solutions Group, said.

    The VXLAN spec requires users to enable core switches in their networks to run multicast routing. With multicast, if one VM needs to reach another, several copies of a packet get sent out to several destinations, and only one of them reaches the intended recipient. BGP EVPN introduces an alternative and more scalable approach: the control plane simply keeps track of where each VM resides, so only one copy of a packet is sent, directly to its destination.
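
    The scaling difference is easy to see in miniature. The hedged sketch below contrasts the two delivery modes: flooding emits one copy per endpoint in the segment, while a control-plane lookup emits exactly one. The helper names and tables are invented for illustration and are self-contained.

        def send_to(vtep_ip: str, packet: bytes) -> None:
            """Stub standing in for a UDP send to (vtep_ip, 4789)."""

        def deliver_flood(frame: bytes, all_vteps: list[str]) -> int:
            """Flood-and-learn style: every endpoint in the segment gets a copy."""
            for vtep in all_vteps:
                send_to(vtep, frame)
            return len(all_vteps)               # packets emitted for one frame

        def deliver_directed(frame: bytes, dst_mac: str, mac_table: dict[str, str]) -> int:
            """EVPN style: the control plane already mapped the MAC to its VTEP."""
            send_to(mac_table[dst_mac], frame)
            return 1                            # exactly one packet emitted

        # With 100 endpoints, flooding costs 100 packets where a lookup costs 1:
        print(deliver_flood(b"frame", [f"10.0.0.{i}" for i in range(100)]))      # 100
        print(deliver_directed(b"frame", "aa:bb:cc:dd:ee:ff",
                               {"aa:bb:cc:dd:ee:ff": "10.0.0.7"}))               # 1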

    The control plane also integrates with OpenStack, so users of the popular open source cloud platform can use it to provision and automate their virtual networks.

    While BGP is not a requirement for using EVPN, most people who adopt EVPN will use BGP, because it is so widespread and well known, Kinghorn said.

    Juniper No Stranger to EVPN

    Juniper has supported EVPN for some time now. “It’s always been our preferred data center connectivity solution, because it is standards-based,” Marcellin said. “We support EVPN today on our routing platform. We basically built it into Junos.” Junos is the company’s network operating system.

    Juniper has its own SDN controller, called Contrail, which also leverages BGP. The company is also looking into OpenFlow support for Contrail, Marcellin said. While Contrail can potentially be used with VXLAN overlays, it is really an alternative overlay technology in its own right. It can interoperate with Cisco gear or even create overlays on top of Cisco switches, he added.

    Interest is growing in using the combination of EVPN and VXLAN, although Marcellin hasn’t seen widespread adoption at this point. “It’s actually a pretty elegant way to solve basic challenges some people face.”

    In Nascent Market, Variety is a Good Thing

    By adding support for EVPN, Cisco is broadening its potential reach in the growing SDN market. This is a new area, and it’s too early to tell which of the various technological approaches will ultimately enjoy the most use. While it may no longer have a reputation as the cutting-edge technology company, Cisco still enjoys the biggest installed base among its rivals in the data center market, and broadening the variety of data center SDN approaches it supports only puts it in a better position to preserve its market share in this quickly changing environment.

    4:30p
    Atlanta: Poised for New Data Center Development

    Mike Dolan, an executive vice president for JLL, and Ryan Fetz, a vice president with the firm, are in JLL’s Atlanta office and are members of the firm’s global Data Center Solutions Group.

    With more and more businesses outsourcing their IT operations and the growing popularity of cloud computing, user demand for space in third-party, multi-tenant data centers is set to spike in the years ahead.

    North American data center owners can expect their combined revenues to grow by 32 percent by 2016, to a total of $14.8 billion, according to 451 Research. Additionally, employment in the data processing and hosting services industry will increase by 17 percent over the next five years, IBISWorld predicts.

    A Boom in Multi-Tenant Data Centers

    Even with the rising demand, the vast majority of data center real estate markets in the United States, including Atlanta, are currently favorable for tenants and in shape to remain that way through the third quarter of next year, according to JLL’s recent 2014 Data Center Outlook. However, by late 2015 and early 2016, supply and demand will be such that rents will rise, spurring a round of new construction to keep pace with demand.

    At that point, Atlanta in particular will likely be poised for a boom in third-party, multi-tenant data center development, in both the retail and wholesale categories (broadly speaking, among other differences, retail data centers offer tenants spaces of up to 5,000 square feet, while wholesale facilities offer spaces typically ranging from 5,000 to more than 50,000 square feet). Metro Atlanta currently has about 1.5 million square feet of data center space, with 40,000 square feet under construction (currently under roof) and another 160,000 square feet planned.

    The Atlanta metro area is appealing to data center owners and developers for a variety of reasons, including:

    • Reasonable land costs and a low overall cost of doing business.
    • Low natural disaster risks.
    • Aggressive local and state government incentives.
    • An abundance of telecommunications fiber. Atlanta is the main Southeast fiber hub with three Internet exchange points.
    • Inexpensive and reliable power. Electricity rates in the metro area, currently in the 4.7 – 5.5 cents per kilowatt-hour range, remain very competitive nationally – about 20 percent below the national average. Georgia Power’s investment in a large nuclear power plant in Waynesboro, Georgia, will help ensure a clean and abundant supply of sustainable power in the future.
    • The accessibility provided by Hartsfield-Jackson Atlanta International Airport. Eighty percent of the U.S. population lives within a direct two-hour flight of Atlanta. This convenience allows data center providers and customers from across the globe easy access to their sites and projects in the city.
    • Widespread industry demand. Data center users in Atlanta are distributed fairly evenly among a range of growing industries: technology (30 percent), banking and financial services (30 percent), healthcare (25 percent) and telecom (15 percent).

    Looking ahead, companies across Atlanta, the United States and the globe appear set to increase their reliance on third-party data center space. Moving away from in-house ownership and management of their own data centers allows firms to mitigate capital expenditures, eliminate resource-intensive maintenance responsibilities, reduce facility operating expenses, respond efficiently to rapid changes in data and equipment needs, access more extensive technical expertise, and free up financial resources to focus on value-generating business initiatives.

    With the popularity of third-party data centers showing no signs of waning, and with the metro area’s array of development-friendly features, it’s easy to predict a significant wave of new data center construction in Atlanta in the coming years.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:00p
    Webinar: Practical Advice on Making DCIM Work for You

    Join us Tuesday, February 24, at 1 p.m. EST for a special one-hour webinar exploring DCIM and its importance in the increasingly complex and rapidly changing data center landscape.

    DCIM experts Jennifer Koppy, Research Director for IDC’s Datacenter Trends & Strategies, and Dhesi Ananchaperumal, SVP of Engineering for Infrastructure Management solutions at CA Technologies, will give the audience practical advice to ensure they get the most out of their DCIM solution.

    Throughout the presentation our speakers will address:

    1. The best approaches for DCIM deployment
    2. How to capture operational metrics to improve data center performance
    3. New ways to unify capacity management across all resources as data center density increases
    4. How to use DCIM’s 3D maps to deliver unrivalled visibility
    5. Ways to integrate DCIM with existing systems and processes
    6. How to use DCIM to shift development and production workloads between infrastructure resources to maintain a consistent user experience

    There will be a Q&A session following the presentation.

    Additional Webinar Details

    Title: Practical Advice on Making DCIM Work for You
    Date: Tuesday, February 24, 2015
    Time: 1 p.m. Eastern/10 a.m. Pacific (duration: 60 minutes, including time for Q&A)
    Moderator: Bill Kleyman
    Register: Sign up for the webinar

    About the Speakers

    Jennifer Koppy is a Research Director for IDC’s Datacenter Trends & Strategies team, which explores the development and adoption of solutions for the datacenter, including IT infrastructure and resource management. She covers the worldwide market for datacenter infrastructure management (DCIM) and follows the trends, technology changes, and market forces that impact both enterprise and service provider datacenters.

    Koppy is an 18-year veteran of IDC, covering multiple technology areas including servers, networking, and client hardware. She has done extensive work in end-user perceptions to shape marketing plans and provides strategic advice on measuring and communicating business value.

     

    Dhesi Ananchaperumal is SVP of Software Engineering at CA Technologies and a member of the CA Council for Technical Excellence. He is responsible for leading and overseeing the day-to-day activities of the engineering team in the Infrastructure Management Solutions business unit, whose products include CA Performance Management, CA Mediation Manager, CA Spectrum, CA DCIM, CA Capacity Manager, and CA eHealth. Dhesi has been awarded the designation of Distinguished Engineer (DE); in this role he provides guidance to executive leadership on the introduction of new technologies, mentorship to the engineering community, and counsel in driving the implementation of new architectures, products, and solutions to ensure innovation, integration, and customer value.

    An industry veteran with expertise in IT infrastructure and energy management, Dhesi has published numerous white papers and articles and is a frequent speaker at data center and IT conferences. Prior to his current role, Dhesi was involved with emerging technologies, business relationship management, and M&A activities for CA. Before joining CA, Dhesi held various management and technology positions at Netegrity, Affinnova, AMS, and Infosys Technologies.

    Dhesi holds a Bachelor’s degree in Electronics Engineering, a Master’s degree in Computer Science and a Master’s degree in Electronics Design Technology from the Center for Electronics Design Technology, India. Dhesi also has ISC2-CISSP, Six Sigma and ITIL certifications.

    Sign up today and you will receive further instructions via email about the webinar.

    5:00p
    Data Center World Spring Conference Slated for April

    “Keeping Pace with an Accelerated Environment” is the theme of AFCOM’s Data Center World event, which convenes April 19-23 at The Mirage Hotel and Casino in Las Vegas. Early bird pricing ends Feb. 13.

    The gathering of 1,000-plus data center operators and managers and more than 150 exhibitors includes more than 70 educational sessions that revolve around the theme of keeping pace with the current pressures facing the industry. Broadly, these pressures include cost, capacity, cloud, the evolving hardware and software environment, and external threats. (The event also provides ample opportunity for peer-to-peer conversations and networking.)

    The infographic below was developed by AFCOM to describe the issues data centers are currently experiencing, and the educational sessions match these topics as well as others.

    Infographic from AFCOM Data Center World.

    Early Bird pricing ends Feb. 13 for the upcoming 2015 Data Center World Global Conference and Expo. To learn more and register, visit Data Center World’s website.

    And feel free to share the infographic!

    5:04p
    Asetek Wins $3.5M Liquid Cooling Contract for Two Data Centers

    Asetek has signed a $3.5 million direct-to-chip liquid cooling deal with the California Energy Commission for two large-scale supercomputing data centers.

    The 24-month project starts in July, with a report from the CEC to follow 12 months later. It will include installation of Asetek’s RackCDU solution across 90 racks. Direct-to-chip liquid cooling will support servers from multiple suppliers. Monitoring equipment will also be installed to track energy and cost savings.

    Denmark-based Asetek retrofits server motherboards with small pipes that bring coolant directly to the CPUs. This is one approach to liquid cooling; others include submerging motherboards in tubs of dielectric oil or sealing the server chassis and flooding it with coolant.

    Asetek recently reached a settlement agreement in a patent infringement lawsuit with CoolIT Systems, another liquid cooling vendor.

    There has been renewed interest in liquid cooling recently, as several vendors have been predicting sky-high power densities in data centers. Densities have not risen nearly as quickly as expected, however. Air cooling has seen major advances in efficiency, while liquid cooling has remained largely confined to niche high-performance computing needs. The CEC supercomputer deal is a prime example.

    Still, it’s too early to rule out a rise of direct liquid cooling in data centers in the future, as modern analytics engines need to crunch through more and more data almost instantaneously.

    The CEC study will make energy and cost savings data from the deployment publicly available.

    “This project is evidence of Asetek’s momentum in the data center and supercomputing segments and further validates the value of Asetek’s direct-to-chip liquid cooling for high performance and high utilization data centers,” said André Sloth Eriksen, founder and CEO of Asetek, in a release.

    Funding for the project will be provided through California’s Electric Program Investment Charge (EPIC) Program, managed by the CEC.

    5:26p
    IBM and SoftBank Teaching Watson Japanese

    IBM’s cognitive computing system Watson is going to Japan through an alliance with Japanese telco and Internet service provider SoftBank. The goal is to bring the new class of Watson-powered application services to market in the region, but first, Watson has to learn Japanese.

    Watson excels at natural language recognition — a talent first publicly demonstrated on the game show Jeopardy!, aired in the U.S. in 2011, where Watson competed against two of the show’s past champions and won. But Jeopardy! is a U.S. show, and the questions were asked in English.

    IBM and SoftBank will collaborate on building a local ecosystem of partners, entrepreneurs, app developers, and investors around the Watson initiative in Japan. As part of the alliance, the technology will also be embedded into SoftBank’s social robotics platform dubbed “Pepper.”

    Pepper is a humanoid robot able to communicate through voice, touch, and emotion, and in combination with Watson it could become one of the most impressive robotics showcases the world has seen.

    For Watson, Japanese mastery will be a major challenge. It is a more difficult language to learn than English or Spanish because of its reliance on kanji, a system of logographic characters rather than a phonetic alphabet. With a different writing system also comes a different grammatical structure, so the switch is not as simple as plugging into something like Google Translate.
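
    One concrete hurdle is easy to demonstrate: written Japanese has no spaces between words, so the whitespace tokenization that works for English recovers nothing, and a system must first segment the text morphologically before any parsing or question answering can begin. The snippet below is our own illustration, not IBM’s code.

        english = "What is the capital of Japan"
        japanese = "日本の首都はどこですか"  # "What is the capital of Japan?"

        print(english.split())   # ['What', 'is', 'the', 'capital', 'of', 'Japan']
        print(japanese.split())  # ['日本の首都はどこですか'] -- one unsegmented blob

        # Real systems run a morphological analyzer (MeCab is a common choice)
        # to find word boundaries; this only shows why that extra step exists.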

    “The Japanese language presented IBM researchers with a number of unique challenges to overcome, most notably the first time the Watson system has learned a language that relies on characters not shared by the Western alphabet,” Paul Yonamine, general manager of IBM Japan, said in a statement.

    If the project is successful, the applications are endless in markets like healthcare, banking, insurance, telecommunications, and the automotive industries.

    “The alliance with SoftBank, an industry pioneer, will bring new Watson capabilities to organizations in one of the most innovative parts of the world,” said Mike Rhodin, senior vice president, IBM Watson Group, in a release. “Together, we will be able to more quickly deploy Watson to enterprises throughout Japan, while building a rich ecosystem of partners, entrepreneurs, developers and other third-parties to design and deliver an entirely new class of cognitive computing apps.”

    IBM made some Watson capabilities available to the masses in 2013 and now claims that over 6,000 applications use them. IBM recently added some new Watson services, extending the types of functionality developers can build into applications.

    6:00p
    Cloud and Hosting Industry Reacts to VMware’s Unified Cloud Strategy

    logo-WHIR

    This article originally appeared at The WHIR

    VMware made a series of announcements last week in support of its “One Cloud” strategy, which, according to the company, is the “first unified platform for the hybrid cloud.”

    On Monday, Feb. 2, VMware announced several updates that support this One Cloud effort including the new vSphere 6 platform, VMware Integrated OpenStack, Virtual SAN 6, vSphere Virtual Volumes, and vCloud Air Hybrid Networking Services.

    With VMware being a direct provider of hybrid and public cloud services through vCloud as well as through its ecosystem of cloud service provider partners, many voices from the hosting and cloud community have weighed in on VMware’s new capabilities and what they mean for the industry.

    Red Hat Cloud Product Strategy GM Bryan Che took to the Red Hat blog to explain that, while appealing, the idea that One Cloud could offer a single unified cloud for running both cloud-native and traditional applications will inevitably run into problems. “[B]y attempting to mash these two worlds together, all One Cloud provides is one limited cloud that is not optimized to run any workload,” Che writes.

    Cloud-native applications (which are architected to be horizontally scalable and resilient against VMs shutting down) and traditional applications (which are designed to scale up to bigger VMs rather than scale out, and which rely on the platform for resiliency) have different infrastructure requirements.

    Che argues that cloud-native apps running on a scale-out OpenStack cloud are limited in their capacity to scale out when OpenStack itself is running on a scale-up platform like vSphere, or Red Hat Enterprise Virtualization for that matter. He also notes, “OpenStack is not optimized to run traditional workloads, so you end up with one cloud that runs both cloud-native and traditional apps, but neither very well.”
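
    The distinction Che draws can be reduced to a toy model: a scale-out application adds capacity by adding small, identical, expendable instances, while a scale-up application grows the single VM it lives on and depends on the platform to keep that VM alive. The sketch below is our simplification of his argument, with all numbers illustrative.

        def scale_out(instances: list[int], demand: int, unit: int = 2) -> list[int]:
            """Cloud-native: add identical units until capacity meets demand."""
            while sum(instances) < demand:
                instances.append(unit)
            return instances               # e.g. [2, 2, 2, 2] -- losing any one is fine

        def scale_up(instance: int, demand: int) -> int:
            """Traditional: grow the one VM; the platform must keep it alive."""
            return max(instance, demand)   # e.g. 8 -- a single point of failure

        print(scale_out([2], demand=8))    # [2, 2, 2, 2]
        print(scale_up(2, demand=8))       # 8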

    In a blog post, Mirantis’ Jodi Smith notes that VMware’s OpenStack business is quickly growing due to enterprise IT’s interest in OpenStack. By including OpenStack under its hybrid cloud umbrella, VMware gives its current customers an option to transition to OpenStack without changing vendors.

    Smith notes that VMware is also trying to stay ahead of the trend towards containers with its Open Container API. “VMware keeps finding ways for customers to keep using their tools as they march boldly into the brave new world of the cloud,” she writes.

    In an interview with The WHIR, Carl Brooks, who covers cloud computing and the next generation of IT infrastructure for 451 Research, says that VMware has been doing a good job of enhancing its enterprise offerings over the past year to compete with Amazon’s growing enterprise services division, as well as companies like HP, IBM and Oracle which go after large enterprise clients.

    A major appeal of VMware and its newly enhanced services, he says, is that enterprise IT can get these new capabilities but with the VMware interfaces with which they’re familiar as well as managed services and support.

    Enterprises using VMware have the option of using the vCloud Air IaaS run by VMware or outsourcing their data center or managed infrastructure through vCloud Air partners like Bluelock or Carpathia.

    “There are tons and tons of VMware hosters out there and also selling the vCloud air technology stack into the enterprise and what they really want this to mean for enterprises is to say ‘all your needs are met by this one thing,’” Brooks says.

    VMware has built a reputation as the default IT vendor for many large enterprises, and rather than risk losing its relevance, it’s positioning itself as a trusted way for customers to gain the benefits of hybrid cloud. He says, “They want to set themselves up – as they essentially have in the past – as the gatekeeper of record for infrastructure automation and orchestration.”

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/cloud-hosting-industry-reacts-vmwares-unified-cloud-strategy

    6:44p
    Newby’s Netrality Buys 25-Story Houston Data Center and Carrier Hotel

    Netrality, a joint venture between real estate firm Amerimar Enterprises and entrepreneur Hunter Newby, has bought a large Houston data center and carrier hotel at 1301 Fannin Street.

    The deal is strategic for Netrality. Houston is an important network connectivity hub because of its high volume of network carriers and because of its proximity and direct network links to Mexico and the Gulf of Mexico, the latter vital to the U.S. oil and gas industry, Houston’s biggest economic driver.

    The building’s tenants include data center providers and enterprise data centers. An example of the former is Internap; an example of the latter is Exxon Mobil. The oil and gas giant, ranked as the second-largest publicly traded company in the U.S. on the Fortune 500 list, has its global data center in the building, according to Foursquare.

    The seller was Griffin Partners, which bought the building in a joint venture with CenterSquare Investment Management Fund in 2007. The companies did not disclose acquisition price in the recent Netrality transaction.

    The 25-story building has 1.1 million square feet total, including about 400,000 square feet of data center space. It offers access to more than a dozen network carriers.

    Netrality plans to build an additional carrier-neutral meet-me room for its Houston data center and network provider customers by mid-2015. There is also room to build up to 100,000 square feet of additional data center space in the building.

    The company has contracted real-estate brokerage Lee and Associates, a Griffin affiliate, to lease the space that is currently vacant.

    “As the fourth largest city in the United States, Houston is a critical location for network operators,” Newby said in a statement. “Further, its proximity and direct network access to Mexico and the Gulf of Mexico make it a global gateway and therefore strategic for us and our customers.”

    Newby and Amerimar have been buying carrier hotels with carrier-neutral meet-me rooms in top U.S. data center markets for about three years. The joint venture got its name (Netrality) only this year, in January.

    Netrality owns buildings in New York, Philadelphia, Kansas City, Chicago, and now also Houston.

    7:00p
    US Announces Dedicated Cyberthreat Center

    logo-WHIR

    This article originally appeared at The WHIR

    An anonymous senior Obama administration official announced on Tuesday that the government is creating a new cybersecurity agency called the Cyber Threat Intelligence Integration Center (CTIIC). The new entity will monitor, collect information on, and analyze potential cyberthreats.

    The new agency will be an “intelligence center that will ‘connect the dots’ between various cyber threats to the nation so that relevant departments and agencies are aware of these threats in as close to real time as possible,” the official said.

    This announcement comes after a year of high-profile hacks, including one of the largest, on Anthem just last week, which exposed 80 million accounts and their private information. JP Morgan, Home Depot, Kmart, Dairy Queen, Xbox, Sony, and ICANN have all been the target of recent attacks designed to obtain sensitive data. The government was also a target, with US military social media accounts hacked, US Department of State email breached, and other systems put at risk by faulty cloud contracts.

    “Obama will issue a memorandum creating the center, which will be part of the Office of the Director of National Intelligence,” according to the Washington Post. “The new agency will begin with a staff of about 50 and a budget of $35 million.”

    Other federal agencies such as the Department of Justice, Federal Bureau of Investigation, Department of Homeland Security, and National Security Agency have cybersecurity-related units. The DOJ just announced such a dedicated unit in December. Having an agency strictly for cybersecurity may engender more trust from the public, given that entities such as the NSA have a reputation for ignoring the public’s right to privacy. The government sometimes keeps known cyberthreats a secret.

    Detractors of Obama’s proposed new cybersecurity legislation believe the NSA should be reformed before cybersecurity-related information flows between the government and the private sector.

    “The cyberthreat is one of the greatest threats we face, and policymakers and operators will benefit from having a rapid source of intelligence,” Lisa Monaco, assistant to the president for homeland security and counterterrorism, told Reuters. “It will help ensure that we have the same integrated, all-tools approach to the cyberthreat that we have developed to combat terrorism.” Monaco will announce the new agency later Tuesday in a speech at the Wilson Center.

    Melissa Hathaway, former White House cybersecurity coordinator, questioned the need for a dedicated agency given the several departments that already coordinate on cybersecurity threats. “We should not be creating more organizations and bureaucracy,” she said to the Washington Post. “We need to be forcing the existing organizations to become more effective — hold them accountable.”

    A recent report found that government insiders are the biggest threat to federal agencies. Last week the President suggested a $16 billion cybersecurity budget for 2016, and in January he proposed new cybersecurity legislation. He alluded to the CTIIC in his Jan. 20 speech, saying that the government would begin to combat cyberthreats the same way it has terrorism: with a dedicated team.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/us-announces-dedicated-cyberthreat-center

    9:59p
    Apple to Get 130MW from Central California Solar Project

    Apple, which has committed to fully powering its operations with renewable energy, has signed a 25-year agreement to buy the output of 130 megawatts of generation capacity of a future solar project in Monterey County, California.

    Apple’s data centers consume more energy than any other part of the company’s operations. That has been the case since fiscal 2013, when energy consumed by its corporate facilities was about 230,000 megawatt-hours, and energy consumed by its data centers was about 300,000 megawatt-hours, according to a company environmental responsibility report.

    Energy consumption by the Apple data center fleet is about to go up once again. The company announced earlier this month it was going to build a massive data center in Arizona. Given the Arizona project and Apple’s public commitment to operations powered by 100-percent renewable energy, the pressure to secure that energy is on.

    The company signed a long-term, $848-million power purchase agreement with First Solar, a Tempe, Arizona-based maker of photovoltaic solar systems, to receive nearly half of the energy the future solar farm will produce. First Solar signed a separate PPA with the California utility PG&E for the remaining 150 megawatts.

    Called the California Flats Solar Project, the 2,900-acre site takes up a small portion of a property owned by Hearst Corp. in Cholame, an unincorporated area within Monterey County. Besides its association with the famous American media dynasty, Cholame, which is mostly cattle-ranch land, is known as the site where actor James Dean died in a car accident in 1955.

    First Solar plans to start construction in Cholame by mid-2015 and be finished by the end of next year. County planning officials approved the project in January, and the county board of supervisors is due to take the final vote on it today.

    Long-term PPAs are often what makes such large-scale renewable energy projects possible, since they secure the necessary financing. California Flats was no exception.

    “Apple’s commitment was instrumental in making this project possible and will significantly increase the supply of solar power in California,” Joe Kishkill, First Solar’s chief commercial officer, said in a statement. “Over time, the renewable energy from California Flats will provide cost savings over alternative sources of energy as well as substantially lower environmental impact.”

