Data Center Knowledge | News and analysis for the data center industry

Wednesday, November 19th, 2014

    1:00p
    French Web Host Builds ARM-Powered Bare Metal Cloud

    PARIS – The French government’s 2012 decision to give upward of €280 million to two companies so they could build cloud computing infrastructure that would keep French data within the country’s borders quite predictably upset a lot of people.

    The government had hand-picked two firms, backed by massive IT vendors and telcos, and gave them a substantial financial advantage on the public’s dime. Everybody from smaller hosting firms like Ikoula to giants like IBM went on record complaining that the government was giving Cloudwatt (backed by Orange and Thales) and Numergy (backed by Bull and SFR) an unfair leg up.

    Yann Leger, vice president of cloud computing at Online.net Labs, was disappointed not only because the competition was getting a government handout but also because Numergy and Cloudwatt had a chance to do something genuinely innovative with the money and instead came up with solutions he considered technologically underwhelming.

    Over the past two and a half years, a small team at Online, the hosting services division of Iliad, one of France’s largest telcos, has been building a cloud almost entirely from scratch, both hardware and software. They didn’t want to use OpenStack (the option Numergy and Cloudwatt went with), figuring it would take them just as long to create their own cloud architecture as to understand OpenStack and adapt it to their needs, Leger said.

    World’s First ARM Server Cloud

    The cloud came online in October, running on thousands of tiny servers powered by ARM processors. Online is the first service provider to build a commercial Infrastructure-as-a-Service offering on ARM chips, or at least the first to say so publicly.

    There were rumors earlier this year that Amazon Web Services was eyeing the architecture, but they were never confirmed, and AWS signaled its continued commitment to Intel by announcing last week that its upcoming EC2 instances would run on custom Intel Xeon chips. Chinese Internet giant Baidu built a cloud storage service using ARM-powered servers, but Online is the first to offer an ARM server cloud where users can spin servers up and down and pay only for what they use.

    Yann Leger, vice president of cloud computing at Online.net Labs, pointing at a network switch powering the company’s cloud.

    Online chose ARM chips because they consume less power than traditional x86 processors. U.K.-based ARM Holdings licenses its architecture to chip makers. The architecture was originally designed for smartphones (most smartphones in existence run on ARM chips) and embedded devices, but more than a handful of companies now make ARM chips for servers, targeting users who are conscious of their energy consumption or applications that need a high degree of processor customization.

    Leger’s team went with the Armada XP chip by Marvell, a semiconductor company based in Bermuda. Baidu’s storage cloud runs on the same chips. The Online team chose a 32-bit chip for its cloud because it is cheaper than the 64-bit ARM chips a company called Applied Micro started shipping earlier this year, Leger explained. Applied Micro’s X-Gene is quite expensive at the moment, and he and his colleagues are waiting for AMD to release the next generation of its ARM SoCs, which they may consider for a future version of their cloud hardware.

    A 3,500-Node Cloud in Four Racks

    One of Online’s servers easily fits in the palm of a hand, and about 900 of them fit in a standard data center rack. Leger calls a single unit of deployment a “platform.” A single platform includes 3,500 servers in 12 chassis sitting in four racks, plus one rack of electrical transformers that convert three-phase power to 48V for the dual electrical feeds supplying the chassis.

    A single Online cloud server fits in the palm of a hand.

    Iliad-made chassis are filled with blades, each blade carrying 18 servers. In the back of each blade is a storage drive. A platform has 80 ARM servers that handle management tasks, such as APIs, DNS, monitoring, and traffic metering.

    There are 10 standard x86 servers that also handle management tasks, including boot. Another set of standard x86 servers supports AWS S3-compatible object storage.
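
    The density figures quoted above hang together; a quick back-of-envelope check (a sketch using only the numbers in the article) reproduces the per-rack and per-chassis counts:

        # Back-of-envelope check of the platform figures quoted in the article:
        # 3,500 ARM servers per platform, spread over 12 chassis in 4 racks,
        # with 18 servers per blade.
        servers_per_platform = 3500
        chassis_per_platform = 12
        racks_per_platform = 4
        servers_per_blade = 18

        servers_per_rack = servers_per_platform / racks_per_platform      # ~875, i.e. "about 900"
        servers_per_chassis = servers_per_platform / chassis_per_platform # ~292, i.e. "close to 300"
        blades_per_chassis = servers_per_chassis / servers_per_blade      # ~16 blades

        print(f"servers per rack:    {servers_per_rack:.0f}")
        print(f"servers per chassis: {servers_per_chassis:.0f}")
        print(f"blades per chassis:  {blades_per_chassis:.1f}")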

    Bare-Metal Cloud for Scale-Out Applications

    Leger said Online’s cloud will work well for any workload that scales horizontally, such as a web frontend. It is not a good fit for something like an Oracle database, which scales vertically. The architecture is distributed, so users can spin up as many servers as they need.

    The cloud isn’t virtualized. Users spin up “bare-metal” servers.
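
    A minimal sketch of the scale-out model this implies: capacity is added by provisioning more small bare-metal nodes rather than moving to a bigger machine. The per-node throughput, headroom figure, and function name below are illustrative assumptions, not details of Online’s service:

        import math

        def frontends_needed(peak_requests_per_sec: float,
                             requests_per_node: float = 400.0,
                             headroom: float = 0.3) -> int:
            """Estimate how many identical bare-metal frontends to provision.

            `requests_per_node` and `headroom` are illustrative assumptions; the point
            is that capacity grows by spinning up more small nodes (scale-out),
            not by buying a bigger server (scale-up).
            """
            needed = peak_requests_per_sec * (1 + headroom) / requests_per_node
            return max(1, math.ceil(needed))

        print(frontends_needed(10_000))  # -> 33 small nodes rather than one large server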

    A single Iliad-made chassis carries close to 300 Online cloud servers.

    Early this month, when we visited Online’s offices and data centers in Paris, about 2,000 users had signed up to try the cloud. The company planned to launch the service into general availability later in November, at which point 7,000 nodes would be deployed, Leger said.

    Online has not disclosed pricing yet, but Leger said the rates would be comparable to DigitalOcean’s and much lower than AWS’s. The cloud will, at some point in the future, be compatible with OpenStack, but only because Leger’s team wants to make it easy for users to move their applications from OpenStack clouds to Online’s.

    Room to Grow

    The cloud currently lives in a 10 megawatt Iliad data center on the outskirts of Paris. The company has another data center in the suburbs and is in the process of building out a third one in the city proper.

    If the new cloud offering is met with strong demand, there is room to grow the infrastructure within Online’s existing footprint, and the hardware can live in any data center. Leger expects the company’s ability to build servers fast enough to keep up with demand to be a bigger issue than data center capacity.

    “If we have that problem, we will be happy to solve it,” he said.

    4:30p
    Erroneous Beliefs That Could Leave You Susceptible to DDoS Attacks

    This Industry Perspective was written by Xuhua Bao, Hai Hong and Zhihua Cao of NSFOCUS.

    Part one of this two-part series discusses the serious nature of DDoS attacks and introduces some of the many assumptions that could leave networks vulnerable to attack.

    DDoS attacks are on the rise, and so too are efforts to defeat them. Analysts forecast the global DDoS prevention market to grow at a rate of 19.6 percent from 2013 to 2018. This growth suggests that DDoS attacks are more than just irritating: people in the know understand that these attacks not only cause disruption but can also inflict damage and tarnish reputations.
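
    If that 19.6 percent figure is read as a compound annual growth rate (the forecast itself does not say, so treat this as an assumption), the implied cumulative growth looks like this:

        # Assuming 19.6 percent is a compound annual growth rate (not stated in the
        # forecast itself), the market would grow by roughly 2.4x over five years.
        annual_growth = 0.196
        years = 5  # 2013 through 2018
        multiplier = (1 + annual_growth) ** years
        print(f"{multiplier:.2f}x")  # ~2.45x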

    However, many still don’t understand how these attacks operate, and that ignorance can cost them. In the discussion that follows, we outline several erroneous beliefs about DDoS attacks that data centers, ISPs, and enterprises should be aware of. The discussion will conclude next week in part two of this series.

    Error #1: Botnets Are the Source of All DDoS Attacks

    This is a commonly held belief but, in fact, not all attacks are carried out by botnets composed of personal computers that have been hijacked by hackers. As technology has advanced, the processing performance and bandwidth of high-performance servers used by service providers have rapidly increased. Correspondingly, the development and use of traditional botnets composed of PCs have slowed.

    Besides the processing capability factor, PCs normally have very limited bandwidth resources, and their in-use periods fluctuate. Therefore, some hackers have begun to look to high-performance servers like those used during Operation Ababil’s attacks on U.S. banks. In addition, attacks are not always carried out by commandeering sources; the hacking group Anonymous prefers to launch attacks using large numbers of real participants. We call this a “voluntary botnet.”

    Error #2: Hackers Launch DDoS Attacks to Consume Bandwidth

    In fact, DDoS attacks can also be designed to consume system and application resources. Thus, the size of the attack traffic is only one of several aspects that determine the severity of an attack. People sometimes assume that SYN flood attacks are a type of DDoS attack that targets network bandwidth. In fact, the primary threat posed by SYN flood attacks is their consumption of connection table resources. Even with exactly the same level of attack traffic, a SYN flood attack is more dangerous than a UDP flood attack.
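
    A rough sketch illustrates why the same traffic volume does more damage as SYN packets than as UDP filler: each small SYN occupies a connection-table slot until it times out. The packet size, timeout, and table capacity below are illustrative assumptions:

        # Rough illustration: 100 Mbps of attack traffic, sent as small SYN packets,
        # translates into a huge number of half-open connections sitting in the
        # target's connection table until they time out. Packet size, SYN-RCVD
        # timeout, and table capacity are assumed values for the example.
        attack_mbps = 100
        syn_packet_bytes = 60          # TCP SYN with typical options, roughly
        syn_rcvd_timeout_s = 60        # how long a half-open entry is held (assumed)
        connection_table_size = 1_000_000

        syn_per_second = attack_mbps * 1_000_000 / 8 / syn_packet_bytes
        half_open_entries = syn_per_second * syn_rcvd_timeout_s

        print(f"SYN packets/sec:        {syn_per_second:,.0f}")     # ~208,000
        print(f"half-open entries held: {half_open_entries:,.0f}")  # ~12.5 million
        print(f"table exhausted:        {half_open_entries > connection_table_size}")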

    Error #3: DDoS Attacks Come in One Speed: Rapid

    UDP flood attacks, SYN flood-type attacks, RST flood-type attacks – when DDoS attacks are mentioned, these are what most people think of. They therefore assume that all DDoS attacks are flood-type attacks. In fact, although these types of attacks account for a large proportion of DDoS attacks, not all attacks are flood-type.

    Aside from flood-type attacks, there are also low-and-slow attack methods. We define the essential nature of a DDoS attack as an attack that consumes a large number of resources or occupies them for a long period of time in order to deny services to other users. Flood-type attacks are used to quickly consume a large number of resources by rapidly sending a large amount of data and requests to the target.

    In contrast to the flood-type attacks’ “hare,” the low-and-slow attacks are more tortoise-like in their approach. They slowly but persistently send requests to the target and thus occupy resources for a long time. This activity eats away at the target’s resources bit by bit. If we view a DDoS attack as an assassination, a flood-type attack is like an assassin that uses a machine gun to take out his target at close range. A low-and-slow attack offers its target a death by a thousand cuts.
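
    To make the tortoise concrete, a small calculation (with assumed parameters for the target’s worker pool and the attacker’s drip rate) shows how little bandwidth a Slowloris-style low-and-slow attack needs to tie up every connection slot on a typical web server:

        # Low-and-slow illustration: hold many connections open by drip-feeding a few
        # bytes to each. The worker pool size and drip rate are assumptions chosen
        # only to show the orders of magnitude involved.
        server_worker_pool = 400          # concurrent connections the target can serve
        bytes_per_drip = 20               # partial header fragment sent per connection
        drip_interval_s = 10              # one fragment every 10 seconds per connection

        attacker_bytes_per_s = server_worker_pool * bytes_per_drip / drip_interval_s
        attacker_bits_per_s = attacker_bytes_per_s * 8

        print(f"bandwidth to occupy all {server_worker_pool} workers: "
              f"{attacker_bits_per_s:,.0f} bps (~{attacker_bits_per_s / 1000:.1f} kbps)")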

    Error #4: If You’re Not a Big-Name Brand, Hackers Won’t Bother Attacking You

    The assumption goes like this: my website is small, so I don’t need to worry about DDoS attacks. However, if you operate a website, even one that earns little income or serves a non-profit, you should take no comfort in wrong-headed ideas like these: “There are so many websites, and most are more famous than mine – a hacker wouldn’t waste their time on me” or “Our operation is just now gaining momentum, but we still don’t make much money and we are not offending anyone – there’s no reason a hacker would choose to attack us.”

    The truth is that these days any site is fair game. When cybercriminals choose extortion targets, they know that attacks on major websites may be more profitable, but the costs and risks are usually also greater. Smaller sites generally have weaker defenses, so an attack is more likely to succeed. Furthermore, competition is one of the major drivers of DDoS attacks. Newcomer businesses may attack established businesses to steal away customers, and established businesses may attack newcomers to remove any potential threat they pose. Malicious retaliatory attacks might not be concerned with size and scale at all; they may just want to prove a point.

    Error #5: Only Hackers Have the Know-How to Launch DDoS Attacks

    At present, most hackers specialize in a certain area. Some specialize in discovering vulnerabilities, some develop tools, some are responsible for system intrusion and some are adept at processing account information. For DDoS attacks, some hackers create and maintain so-called “attack networks.” Some of them exploit botnets and some take over high-performance servers. After assembling their attack capability, they rent out their resources to a customer. It is not necessary for this hacking customer to have any specialized knowledge of the technology. DDoS attacks can be carried out by cybergangs, the business competitor across the street or a disgruntled employee. With hackers for hire, there are potential attackers everywhere.

    We will conclude our discussion on the serious nature of DDoS attacks during part two of this article being published next week.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:00p
    DCIM: Building the Foundation for Agile Data Centers

    The modern data center is the hub for cloud computing, virtual platforms, and of course, all types of new applications. The proliferation of BYOD and IT consumerization has placed even greater resource demands on data center platforms. Organizations and data center administrators are constantly looking for ways to create a more agile and efficient infrastructure. But where do they begin?

    The ability to deliver IT service in an agile, cost-effective manner is increasingly a strategic business differentiator as the demand for IT swells and strains the current physical infrastructure management methodology. Much of the focus on flexible data centers revolves around the software application, compute, networking, and storage layers. Only after an initiative has begun are the most suitable best practices in data center infrastructure management (DCIM) considered.

    This whitepaper from IDC and Nlyte discusses the trends affecting DCIM implementations and highlights the role that powerful data center control platforms play in the DCIM market.

    The conversation starts here:

    Delivering a highly virtualized data center has become the new management focus to gain flexibility.

    Data centers now use software components that virtualize and federate data center-wide hardware resources such as storage, compute, and networking, and that will eventually centralize use of power generation and distribution equipment. The goal is to tie together these disparate resources and make them available in the form of an integrated service — a service that is governed by policies and processes and that can be metered and measured.

    What are some of the big benefits?

    Creating an agile data center requires a unified, collaborative effort. This effort can result in reducing the time, cost, and risk of not only daily operations but even more strategic migrations and consolidations. Beyond the economic benefits, DCIM solutions support enterprise-wide IT goals, including the following:

    • Respond quickly to end-user demands
    • Gain control over the plethora of ad hoc and manual processes that plague data center management today
    • Support audit and compliance initiatives
    • Focus on innovation instead of maintenance
    • Rationalize decisions on insourcing versus outsourcing

    Download this whitepaper today to learn how a comprehensive DCIM solution can provide visibility into resources ranging from the critical facilities (power and space) to the servers, storage, and even the network connections. A dynamic, visual map of the physical data center and all of its layered applications, with full planning and impact analysis, lets IT and facilities organizations manage infrastructure components and see the connectivity between them.
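
    As a rough illustration of what such a connectivity map makes possible, here is a minimal sketch (component names and relationships are invented for the example) that models physical dependencies as a graph and answers an impact question by walking it:

        # Minimal sketch of a DCIM-style connectivity map: physical components and
        # the "feeds"/"hosts" relationships between them, so an impact question such
        # as "what is affected if PDU-1 fails?" can be answered by walking the graph.
        # Component names and relationships are illustrative assumptions.
        from collections import defaultdict

        feeds = defaultdict(list)  # component -> components that depend on it
        for upstream, downstream in [
            ("PDU-1", "Rack-A"), ("Rack-A", "server-01"), ("Rack-A", "server-02"),
            ("server-01", "app-billing"), ("server-02", "app-billing"),
        ]:
            feeds[upstream].append(downstream)

        def impacted(component, graph):
            """Return everything downstream of `component` (depth-first walk)."""
            seen, stack = set(), [component]
            while stack:
                for child in graph[stack.pop()]:
                    if child not in seen:
                        seen.add(child)
                        stack.append(child)
            return seen

        # Losing PDU-1 takes out Rack-A, both servers, and the billing app.
        print(impacted("PDU-1", feeds))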

    Critical to the success of a DCIM implementation is how well the entire organization interacts with and updates the solution to ensure the continued integrity of the system. A workflow management system that supports and proactively drives changes is important in gaining widespread use of the system and in enforcing best practices in data center management.

    6:28p
    SAP Making $150M Australian Government Cloud Push

    Germany-based software company SAP is investing AU$150 million (approx. $130 million USD) in Australia in a bid to capture government business.

    The investment will include a data center to support its HANA enterprise cloud, and the creation of a new facility to be known as the SAP Institute for Digital Government. Both are set to open in the second quarter of 2015. The company does not plan to build the data center itself, but has yet to name a provider.

    SAP already has solid government business in Australia, touting more than 50 core federal agencies as customers. Australia’s government has a cloud-first mandate much like the United States’, having released the policy in October. The country acts as a hub for the wider AsiaPac market, and its enterprise market in general is considered ripe for cloud adoption.

    The institute will act as a hub for developing best practices. Officials will be able to check out prototypes and proofs of concept of SAP technologies as well as interact with technical experts.

    SAP has sharpened its focus on cloud and cloud services after many years of being known for traditional software and licenses. SAP HANA in-memory computing is the backbone of its cloud initiatives: HANA has been the focus at recent SAP events and acts as the integration point for all of the company’s services. SAP acquired corporate travel services giant Concur for $8.3 billion in September. Concur also does solid business with federal agencies worldwide.

    There has been a lot of cloud and data center activity in Australia in response to cloud first policies and Australia’s AsiaPac hub status.

    VMware is offering public cloud through a partnership with Telstra. IBM/SoftLayer recently opened a data center in Melbourne as part of a global $1.2 billion expansion. Red Cloud is undergoing a massive Australian expansion via t4 modules, adding 1 million square feet of space, and Global Switch recently completed the first phase of a $300 million Sydney data center.

    “We anticipate government addressing the underinvestment in ICT that has occurred over the last seven or eight years, and see some large programs on the horizon,” Damien Bueno told the Australian Financial Review. “The decision to do this had its genesis in the late part of 2012, and our decision has been supported by recent government decisions regarding cloud.”

    7:49p
    Intel Flashes 10nm Next-Gen Xeon Phi at Year’s Big Supercomputing Show

    At this week’s SC14 in New Orleans, the big supercomputing show of the year, Intel disclosed a handful of details about the future of its Xeon Phi line, including the chip code-named Knights Hill, as well as new architectural details about its Omni-Path fabric interconnect technology. As the Xeon Phi series matures and more systems deploy it, Intel is assuring stakeholders that there is a future for a chip that is becoming a standard high-performance computing building block for some of the world’s fastest supercomputers.

    Following the upcoming 14 nanometer Knights Landing product, Intel said its third-generation Knights Hill Xeon Phi family will be built using 10 nm process technology and integrate second-generation Omni-Path Fabric technology.

    The first commercial systems based on Knights Landing are expected to ship next year and will incorporate Intel’s silicon photonics technology. Knights Landing will also be the first stand-alone chip in a product line that has so far consisted of co-processors used to offload calculations from the main CPU.

    While sticking to its iterative release process for Xeon Phi, Intel has shown flexibility with custom Xeon orders, making special chips for new Amazon C4 instances and for Oracle database machines.

    According to the November 2014 Top500 list, Intel-based systems account for 86 percent of all supercomputers and 97 percent of all new additions.

    Intel has re-branded its OmniScale interconnect technology as Omni-Path and said the new architecture is expected to offer 100 Gbps line speed and up to 56 percent lower switch fabric latency in medium-to-large clusters than InfiniBand alternatives. The Omni-Path architecture is based on a 48-port switch running at that 100 Gbps line speed.
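
    The 48-port radix matters mostly for cluster scale and hop count. As a rough illustration (a standard two-tier fat-tree calculation, not an Intel-published figure), compare it with the 36-port switches common in InfiniBand fabrics:

        # Why switch radix matters: in a two-tier (leaf/spine) fat-tree, a switch
        # with k ports supports up to k*k/2 end nodes. The 36-port figure is the
        # common InfiniBand switch radix; the comparison is illustrative only.
        def two_tier_max_nodes(ports_per_switch: int) -> int:
            return ports_per_switch * ports_per_switch // 2

        print(two_tier_max_nodes(48))  # 1,152 nodes with 48-port Omni-Path switches
        print(two_tier_max_nodes(36))  # 648 nodes with 36-port switches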

    To further enable Omni-Path, Intel launched a Fabric Builders program to help form an ecosystem of partners collaborating on Omni-Path solutions.

    Global geosciences company DownUnder recently purchased a customized SGI Rackable HPC solution with Intel Xeon processors and 3,800 Xeon Phi co-processors. DownUnder managing director Dr. Matt Lamont said the “combination of Intel Xeon Phi co-processors with our proprietary software allows us to provide our customers with one of the most powerful geo-processing production systems to date.

    “Our Intel Xeon Phi powered solutions enable interactive processing and imaging from each of our geophysicists’ individual computers. A testing regime that once took weeks can now be achieved in days.”

    10:00p
    Microsoft Azure Outage Knocks Out Websites and Xbox Live


    This article originally appeared at The WHIR

    Microsoft began reporting a cloud services outage on its status page around 5 pm Pacific time on Tuesday.

    “Our investigation of the alert is complete and we have determined the service is healthy,” an advisory for Cloud Services in multiple regions said. “A service incident did not occur for Cloud Services in multiple regions.”

    However, this was quickly followed by reports of partial performance degradation with Traffic Manager and service interruptions for Azure services in multiple regions as users reported more problems on Wednesday. Azure acknowledged the problems on its Twitter account at 6:30 pm on Tuesday. The Twitter account reported the outage was resolved by 10:56 pm, yet the status page continued to report problems hours later.

    Thousands of sites using Azure as a web host were down for hours, including Microsoft’s own msn.com and the Windows Store. There was also a storage outage in Western Europe. The Azure status page says affected customers can get more information through the management portal.

    Azure had other outages this year, including several in August that coincided with the release of new Office 365 features. Azure was also experiencing an outage when the Xbox One, with its heavily promoted online gaming features, launched in November last year.

    This outage coincided with Office 365 and Xbox announcements as well. Minecraft: Xbox One Edition was released on Tuesday for the holiday season. Minecraft runs on IBM SoftLayer infrastructure; Microsoft purchased the game’s maker in September for $2.5 billion in an effort to lure a younger crowd to its products. Microsoft also announced the new video feature of Office 365 on Tuesday.

    Xbox Live users were offline for the second time this month; Xbox Live runs on Azure. According to a report by TheNextWeb, the Xbox Live support page said earlier that “‘social and gaming are limited’ and that a number of functions including matchmaking, party and chat are currently unavailable.”

    The Xbox support Twitter account, “Guinness World Record Holder: Most Responsive Brand on Twitter,” posted only two tweets during the outage: one acknowledging the problem and one reporting that it was fixed. Support was responding to user complaint tweets.

    With competing services from Amazon, Google, and CenturyLink cutting prices, Azure needs to stay on its toes. Several industry experts believe that cloud prices have fallen to the point where cloud capacity is simply a commodity.

    It’s important for providers hoping to survive in the increasingly competitive cloud space to compete on service. Outages may be inevitable but certainly don’t help the case for customers looking for the most reliable service provider.

    For example, the BBC reported the outage as “hugely disruptive” to customers. It affected not only some of Microsoft’s large customers, such as Toyota, Boeing, and eBay, but also smaller customers such as the Surrey-based company SocialSafe.

    “It’s hugely disruptive. There’s obviously an adverse impact when your whole website goes down – that’s where people expect to download and access our service,” SocialSafe’s founder Julian Ranger told the BBC.

    “We switched to Azure because the previous provider did occasionally have outages and obviously you want your site and the supporting software, which is hosted on servers behind it, to always be operating. The point about Azure was that they guarantee that your site will always be up because there are multiple places, effectively, where your software can run. If there’s one problem, it should happily switch to run elsewhere,” Ranger said. “And that’s just not happening today – we’re completely out.”

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/microsoft-azure-outage-knocks-websites-xbox-live

    10:30p
    Fasthosts Shared Hosting Customers Face Downtime as DDoS Attack, Windows 2003 Vulnerability Shake Platform


    This article originally appeared at The WHIR

    A denial of service attack caused customers of Fasthosts shared hosting services to experience intermittent website downtime on Monday. According to a report by The Register, the downtime was a result of a loss of DNS performance due to the DDoS attack.

    The Register said that the outages impacted customers for around five hours on Monday morning.

    According to an emailed statement from Fasthosts obtained by The Register, the company also identified a separate issue: a vulnerability in part of its Windows 2003 shared web server platform.

    “The small affected proportion of our large hosting platform was immediately isolated, and work is being undertaken to investigate and fix the issue as swiftly as possible,” Fasthosts said in the email.

    As a “precautionary measure” Fasthosts took some shared hosting servers offline on this platform.

    On its help page, Fasthosts said that when a new vulnerability is found in the Windows 2003 shared hosting platform, it usually takes one of three steps: apply an update in its next maintenance window; immediately mitigate the vulnerability and apply a permanent update in the next maintenance window; or immediately apply an update.

    An update this morning said that the vulnerability “has been fully understood” and its servers are in the process of coming back online after being updated. “So far progress is going well and 50 percent of the platform is back online,” Fasthosts said.

    On its Facebook page, Fasthosts said it expects the issue to be resolved fully “later this evening.”

    Fasthosts provided customer updates via its Facebook page as well as its system status page on its website, and apologized to customers about the downtime on Twitter.

    In terms of compensation, Fasthosts told one affected customer on Twitter that compensation can be discussed “once the issue has been resolved so [Fasthosts] have timescales.”

    At the beginning of the year, customers of Fasthosts were offline for up to seven hours after a utility power outage triggered connectivity problems.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/fasthosts-shared-hosting-customers-face-downtime-ddos-attack-windows-2003-vulnerability-shake-platform

    10:35p
    AWS Commits to 100% Renewable as Google Signs Big European Wind Deal

    It’s been a big week for carbon-neutral data center energy, with both Google and Amazon taking big steps to power their data centers with clean energy.

    Google announced that its Eemshaven, Netherlands, data center will be powered by 100 percent renewable energy when it comes online, its first data center to do so on day one. Amazon quietly committed to going 100 percent renewable. Its cloud business Amazon Web Services has been a frequent target of criticism by Greenpeace, which has accused it of using too much coal power and not being transparent about the fuel mix that powers its cloud.

    Google signed a power purchase agreement for a 63 megawatt wind farm that will power its Netherlands data center planned for 2016. The new deal is a 10-year agreement for the entire output of a new 18-turbine onshore-offshore wind farm being built by Eneco at Delfzijl. The wind farm will be 20 kilometers away from the data center, close enough to see on a clear day.

    PPAs: Google’s Green Weapon of Choice

    The PPA means Google has now signed agreements for over 1 gigawatt of wind across its footprint. The company announced it hit 1 gigawatt of total renewable energy on Earth Day.

    Such PPAs are a common instrument Google uses to get carbon-neutral data center energy. A long-term PPA helps finance construction of the wind farm, which then generates energy Google sells into the local grid. The company keeps the renewable energy credits associated with that energy and applies them to offset the carbon footprint of the grid electricity it buys for its data centers.
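
    Rough arithmetic gives a sense of scale for a deal like this; the capacity factor and the data center’s average load below are assumptions for illustration, not figures from Google or Eneco:

        # Rough sense of scale for the PPA: annual output of a 63 MW wind farm versus
        # the consumption of a large data center. The 35 percent capacity factor and
        # the 25 MW average facility load are assumed values.
        wind_farm_mw = 63
        capacity_factor = 0.35                 # assumed for Dutch wind
        hours_per_year = 8760

        wind_mwh_per_year = wind_farm_mw * capacity_factor * hours_per_year
        datacenter_avg_load_mw = 25            # assumed average draw
        datacenter_mwh_per_year = datacenter_avg_load_mw * hours_per_year

        print(f"wind farm output:   {wind_mwh_per_year:,.0f} MWh/yr")        # ~193,000
        print(f"data center demand: {datacenter_mwh_per_year:,.0f} MWh/yr")  # ~219,000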

    The new Dutch data center is expected to cost €600 million (about $750 million).

    When the project was first confirmed, spokesman Mark Jansen told Reuters the site was chosen because of the stability of the Dutch energy supply. A big reason for building the data center at Eemshaven was the range of renewable energy options in the area.

    “By entering into long-term agreements like this one with wind farm developers, we’ve been able to increase the amount of renewable energy we consume while helping enable the construction of new renewable energy facilities,” wrote Francois Sterin, director of Google’s global infrastructure team.

    This is the third PPA Google has signed in Europe in the last 18 months. The other two were with wind farm developers in Sweden to power the data center in Hamina, Finland. A deal in 2013 was followed by the company buying the total output of four new wind farms in January. Google’s Hamina data center also uses seawater for cooling. Investment there has topped $1 billion.

    “Google has been fairly consistent [in terms of signing PPAs],” said Gary Cook, senior analyst at Greenpeace. “They’ve been signing long term contracts which is bankable for renewable energy players. Microsoft is doing the same. We’d really like to see Amazon follow suit.”

    Amazon Still Quiet on Data Center Energy

    Amazon has been Greenpeace’s target of choice as a laggard in renewable energy initiatives. However, the company quietly announced that it is committing to 100 percent renewable energy across its global data center footprint.

    “Amazon Web Services’ new commitment to power its operations with 100 percent renewable energy represents a potential breakthrough toward building a green Internet,” said Cook. However, Cook believes Amazon needs to be more transparent about how it hopes to achieve this.

    “Amazon’s customers will need more information to be sure that AWS means business about renewable energy,” he added. “AWS should offer a plan for how it will implement its ambitious new commitment across its footprint. Apple, Facebook, and Google, three of Amazon’s peers and rivals, all have laid out road maps that explain how they intend to achieve their goals of procuring 100% renewable energy.”

    Amazon recently announced it was opening a 100 percent carbon-neutral AWS cloud region in Frankfurt, Germany. “It was interesting when Amazon announced Frankfurt and said it was 100 percent carbon neutral,” said Cook. “James Hamilton, at the recent event, said it was now their third carbon neutral region, but they don’t have any details on how to make it happen.”

    This roadmap is important in evaluating the true impact. A long-term PPA with a wind farm has a positive impact on the renewable energy industry, while some other routes to carbon neutrality — such as simply buying renewable energy credits — aren’t as impactful.

    We reached out to Amazon for comment; a company spokesperson said the company didn’t have more information to offer but that its sustainable energy commitment statement may be updated periodically. So far, the company has made the general commitment and is emphasizing the hardware, software, and operational efficiencies it achieves at scale. AWS said it uses rack-optimized systems that use less than one-eighth the energy of the blade enclosures commonly used in corporate data centers.

    The three regions the company says are currently carbon neutral are US West (Oregon), EU (Frankfurt), and AWS GovCloud (Northwest).

    Apple Reverses Past Dirty-Cloud Reputation

    Another giant that has made a lot of progress in clean data center energy is Apple, which hit 100 percent renewable energy in its data centers in 2013 and has since dramatically increased its efforts. The push has produced a very positive response from customers. “It’s worked quite well for Apple,” Cook said.

    Apple and Google recently lobbied Duke Energy for renewable energy in North Carolina, resulting in a $500 million investment by the utility. This was a big turnaround for Apple, which Greenpeace called out in 2011 for operating a cloud that was ‘dirty and dangerous.’

    “Apple showed the industry that it’s better to get out in front of this,” said Cook. “Groups like us are looking at them. Customers have high expectations.”

    This is Google’s first company-owned and company-built data center in the area, but it has operated out of a TCN-owned data center in Eemshaven for over six years. The company has three other large European data centers: in Ireland, Finland, and Belgium.

    The Eemshaven facility will span 44 hectares. Employment in the area will get a boost, with Google expected to create 150 jobs.

    Google isn’t the only tech giant attracted to the Netherlands’ power profile. Apple is said to be considering a data center in the area for similar reasons.

    Google has made several bulk purchases of wind power in the U.S. as well. It mostly deals with wind farm developers, but in 2012 it began working with utilities on wind power for the first time, using its sway to convince them to invest in renewables. The announcement that it had invested in 1GW of renewable energy generation capacity came after a blockbuster deal with MidAmerican Energy.

    11:00p
    Report: Qualcomm Building ARM Server Business

    Qualcomm, the San Diego, California-based semiconductor vendor that dominates the global smartphone chip market, is getting into the server processor business.

    The company licenses chip architecture from U.K.’s ARM Holdings. Most of the world’s smartphones are powered by chips built using ARM designs, and Qualcomm is the biggest supplier of these chips.

    Because they use less power than the x86 processors commonly used in servers, ARM chips have become an attractive proposition for the server market, where data center energy use has been an issue of growing importance in the past several years. That interest has spurred an ecosystem of startups and old-guard companies (such as AMD) building ARM chips for servers.

    Earlier this year, a company called Applied Micro started shipping the world’s first 64-bit ARM server processor. AMD and Texas Instruments are among other chipmakers with 64-bit ARM parts for servers in the works.

    HP started shipping the world’s first off-the-shelf ARM-powered servers in September. A French web hosting company called Online.net Labs (an Iliad subsidiary) launched an Infrastructure-as-a-Service cloud in November built on ARM servers it designed in-house.

    Qualcomm CEO Steve Mollenkopf announced the company’s server-market ambitions in a meeting with analysts in New York this week, Dow Jones reported.

    “We are engaged with customers,” the report quoted him as saying about Qualcomm’s nascent server business. “It will take us a while to build this business, but we think it is an interesting opportunity going forward.”

    Mollenkopf hinted that the company was well positioned to tackle the server market at CES in Las Vegas in January, Reuters reported.

