Data Center Knowledge | News and analysis for the data center industry
 

Thursday, March 16th, 2017

    12:00p
    Atlanta Data Center Market Firing on All Cylinders

    A flurry of recent activity underscores the importance of Atlanta as an emerging hub for data centers in the Southeast.

    The demand for data center space leased from third-party landlords appears to be growing across all segments there, including colocation and interconnection, enterprise outsourcing, and cloud and hybrid IT solutions.

    Illustrating the strength of the Atlanta data center market is recent acquisition and expansion activity there. Earlier this month, Ascent announced the acquisition of two enterprise data centers, one of them a BlackBerry facility in the Atlanta market. After the closing bell this past Wednesday, Digital Realty Trust announced it was expanding its Telx business in the Atlanta metro, where strong demand for colocation space has put its 160,000-square-foot interconnection hub at 56 Marietta St. at full capacity.

    Connectivity Demand is High

    The carrier hotel was a crown jewel in Digital Realty’s Telx acquisition in 2015. Today, even the building’s sub-basement is leased to customers, John Stewart, a senior VP at Digital Realty, said in an interview with Data Center Knowledge. Atlanta is now the third-largest colocation and interconnection market for Digital, trailing only New York and Chicago, where the San Francisco-based data center REIT also owns key connectivity hubs.

    The existing 300-plus customer ecosystem at 56 Marietta includes 150 network, telecom, and cloud providers, and over 11,000 cross-connects. Stewart explained that customer demand is so strong that Digital decided to execute a long-term lease at 250 Williams St. as the fastest way to provide another 18,000 square feet of raised floor. That is an unusual move for a company that owns almost all of its other 140-plus data centers worldwide, and it illustrates how urgent the need for data center capacity in the market is.

    The new facility is located about 0.6 miles from 56 Marietta and tethered to the carrier hotel’s network meet-me rooms by high-speed fiber. It essentially functions as an annex and does not require the use of Metro Connect, Digital Realty’s connectivity infrastructure that links its facilities within a single metro. Customers will be able to self-provision cross-connects as if they were 56 Marietta tenants, using Digital’s MarketplacePORTAL and Service Exchange software networking tools.

    250 Williams St. in Atlanta (Photo: Cousins Properties)

    Office REIT Cousins Properties owns the American Cancer Society building located at 250 Williams St. The building is just under 1 million square feet, and is designed for up to a 24MW critical load. It sits atop Georgia Power’s Fowler Underground Grid System, which assures an uninterrupted supply of power, according to Cousins.

    Notably, the building housed communications infrastructure for the 1996 Summer Olympics, and its floors can handle 150 pounds per square foot. Digital has an option to take down a second chunk of space of similar size from Cousins. This is important, because Stewart believes the first phase at 250 Williams St. will be full by the end of 2018.

    Ascent Enters Atlanta via Sale-Leaseback

    Earlier this month, Ascent Corp. announced that it had entered into a sale-leaseback arrangement with a “global technology company” for enterprise data centers in both the Atlanta and Toronto markets. Subsequently, a news report by Appen Media Group confirmed the ATL1 data center, purchased for $30 million, was located on a 40-acre site formerly owned by BlackBerry.

    Ascent and BlackBerry inked a partial lease-back for the 8.1MW facility in Atlanta, which is expandable to 14MW. The Toronto campus will have up to 4.8MW of critical power in TOR1 available by Q2 2017, and this campus could eventually be scaled up to 75MW of critical load.

    “Our ATL1 facility is a true enterprise-class data center,” Ascent CEO Phil Horstmann told us in an email, in which he also touted the robustness of the facility’s electrical and mechanical infrastructure, its access to fiber, and its severe-weather-resistant structure.

    The two Ascent sale-leaseback deals are in partnership with TowerBrook Capital Partners. This arrangement allows the customer to focus on its core business while allowing Ascent to offer other enterprise customers “plug-and-play” data center space for immediate occupancy and potential for future expansion.

    QTS – Hybrid IT Demand

    In addition to being home to corporate and regional headquarters, metro Atlanta is also home to over 13,000 technology companies. This is one of the reasons QTS Realty has a large footprint in this market, with mega-data center campuses in Suwanee and metro Atlanta.

    The company’s former CIO, Jeff Berson, was recently elevated to the CFO position. “A business-friendly posture, plus the well-educated work force, including a large engineering talent pool, and fiber connectivity, help make Atlanta a great market for data centers,” he said in an interview.

    In addition to being the “gateway to the Southeast,” the region’s low cost of power makes it attractive for hyper-scale customers, he noted. Power can be as low as $0.045 per kWh, plus taxes and fees, according to Berson.

    Meanwhile, bread-and-butter QTS clients in Atlanta continue to be customers interested in deploying a hybrid cloud IT stack. Enterprise verticals that are particularly active in the market include health care, financial services, and retail, in addition to cloud, technology, and IT services. There is also a considerable federal government presence in Atlanta, but government demand can be lumpy and hard to predict, Berson added.

    The other main attraction mentioned by both Berson and Horstmann is what Atlanta doesn’t have — a propensity for hurricanes and other natural disasters. While it is still early in the year, it looks like data center leasing activity is heating up for incumbents in 2017, as well as Ascent, the latest player to call Atlanta home.

    12:00p
    Digital Realty Expands Atlanta Data Center Space

    Digital Realty Trust announced the addition of an 18,000-square-foot, $22 million Atlanta data center that will be connected via high-speed fiber to its massive carrier hotel located a little more than a half-mile away.

    The fact that its 160,000-square-foot 56 Marietta St. site has “no vacancies,” coupled with continued demand, prompted the multi-tenant data center provider to expand to the new downtown Atlanta site at 250 Williams St.

    Although Digital will lease 50,000 gross square feet in the American Cancer Society building, as it is known, less than 30 percent of it will be used as raised-floor space for servers and computers.

    See also: Atlanta Data Center Market Firing on All Cylinders

    Digital Realty Vice President John Stewart told Data Center Knowledge that he expects the new space to be at capacity by the end of 2018.

    Once headquarters for the 1996 Olympics, the Williams St. building went through a major expansion in 2012, making it the largest multi-tenant data center in the Southeast.

    Called the Gateway to the Southeast, Atlanta is the third largest market for Digital, which owns, operates and manages server farms in 33 markets.

    Digital, traditionally a wholesale data center provider, made retail colocation a major focus after it acquired the 56 Marietta site as part of its Telx acquisition in 2015.

    Today, the $19.9 billion company says it owns more than 150 data centers spanning 26 million square feet, with 119 in North America, 30 in Europe, and seven in the Asia Pacific region.

    3:00p
    Idaho One Step Closer to Passing Data Center Tax Breaks

    Idaho just moved one step closer to becoming the 21st state to offer tax incentives for existing data centers that want to expand and newcomers looking to build from scratch.

    House members approved the Idaho Department of Commerce-backed measure 35-34 today, so it’s now headed to the Idaho Senate for final approval.

    Known as the Technology Equipment Tax Rebate Bill, it would provide rebates on equipment for new builds and refurbishments. What might be even more enticing for companies considering Idaho is that incentives can also be applied to the replacement of old equipment as it nears the end of its lifecycle—generally every three to four years.

    In order to qualify for the data center tax breaks, companies would have to adhere to strict criteria set by the Department of Commerce to ensure that Idaho benefits economically as well. For example, new data centers must make at least a $25 million capital investment, and create 20 jobs that pay above the county’s average pay within two years of operation. After these requirements are met, they’ll be considered an existing data center.

    Existing data centers looking to replace old equipment or expand a facility must invest $5 million in eligible server equipment in a year to qualify for a 100 percent sales tax rebate on it.

    State officials estimate that the bill would take $531,000 per year out of the state’s general fund, but if data centers comply with the addition of jobs and increased spending, that “loss” would pale in comparison to the “wins.”

    According to an article in Greenbelt Magazine, one independent contractor recently estimated that the development of a data center in Idaho could generate up to 1,100 jobs, half a million labor hours, and nearly $16 million in payroll.

    Idaho will have to stick to its guns, though, and avoid what happened to Oregon back in 2015. With similar qualifications for rebates in the booming Hillsboro area, some data centers fell far short of expectations. According to the Oregon Department of Revenue, Infomart Portland saved $775,000 on property taxes in 2013 and 2014, but only employed one full-time person. Other data centers taking advantage of the incentives were reported to be employing fewer than two.

    With just seven existing data centers, Idaho is hardly a mecca like Northern Virginia and North Texas have become. That also means that unlike those areas, where land is being plucked up left and right by the Googles and Microsofts, there’s plenty left in the Gem State. Service providers already calling Idaho home include DataSite, Involta and the FBI.

    Given the state’s pending legislation, and the fact that Idaho is ranked by Sperling’s as one of the safest locations in the country for weather and other disasters (when was the last time you heard about tornadoes, hurricanes or earthquakes striking Boise?), companies are likely to take a long look at locating there.

    If passed, the bill wouldn’t expire until 2024.

    3:30p
    Oracle’s Cloud Business Shows Momentum as Sales, Profit Beat

    Brian Womack (Bloomberg) — Oracle Corp. posted third-quarter revenue and profit that topped analysts’ estimates, signaling growing demand for the software maker’s cloud-based services that compete with Amazon.com Inc. and Salesforce.com Inc.

    Profit before certain items was 69 cents a share, compared with an average estimate of 62 cents. Adjusted sales rose 2.9 percent to $9.27 billion in the period that ended Feb. 28, the Redwood City, California-based company said Wednesday in a statement. On average, analysts had projected $9.26 billion, according to data compiled by Bloomberg.

    The report marked three straight quarters of revenue gains after more than a year of declines. Oracle has been adding products and pushing customers toward its cloud-based business software and services, which offer computing and storage power from remote sites. Oracle’s infrastructure offering, a product that goes head-to-head with Amazon Web Services, will eventually be the software company’s biggest cloud business, Executive Chairman Larry Ellison said.

    See also: Oracle’s Cloud, Built by Former AWS, Microsoft Engineers, Comes Online

    “These results show a nice upward inflection in the overall business as new cloud revenues are more than offsetting the declines in software license sales,” Rodney Nelson, an analyst at Morningstar, said via email. That performance and Ellison’s comments may “be fueling some additional optimism around the transition,” he said.

    Overall, sales from Oracle’s cloud businesses gained 62 percent in the recent period. New software licenses, a measure that’s tied to Oracle’s traditional on-premise software business, declined 16 percent to $1.41 billion — smaller than the drop of 20 percent posted in the fiscal second quarter.

    Shares Jump

    Oracle’s shares rose as much as 5.6 percent in extended trading. The stock had climbed less than 1 percent to $43.05 at the close in New York.

    Net income in the recent quarter rose 4.5 percent to $2.24 billion, Oracle said. The company also raised its quarterly cash dividend to 19 cents a share, up from 15 cents.

    On a conference call after the report, Ellison said to expect some large deals for customers moving databases to the “infrastructure-as-a-service” business, the company’s Amazon competitor that provides core computing power and storage. Ellison said his service has technological advantages and he expects big things from the lineup.

    “We are now in position to help our hundreds of thousands of database customers move millions of Oracle databases to our infrastructure-as-a-service cloud,” Ellison said. “And before long, infrastructure-as-a-service will become Oracle’s largest cloud business.”

    Chief Executive Officer Safra Catz also expressed optimism about the cloud. She said that piece of the business should grow 25 percent to 29 percent in the current quarter, on an adjusted basis and measured in constant currency. The other part of Oracle’s cloud business, which includes applications for human resources and finances, should grow 69 percent to 73 percent, she said on the call.

    Overall, she expects fourth-quarter adjusted sales to range from a decline of 1 percent to a gain of 2 percent, based on constant currency. Adjusted earnings are forecast to be 78 cents to 82 cents a share.

    NetSuite Deal

    The third quarter was the first full period since the company acquired NetSuite Inc., a provider of cloud-based financial services, for $9.3 billion, one of its largest-ever deals. NetSuite is one of the biggest pure providers of these modern software features, having carved out a leadership position in the market for tools that manage customers’ core financials. Adjusted sales in the part of the cloud business that includes NetSuite rose 85 percent.

    “It appears that business activity was solid, particularly for cloud,” John DiFucci, an analyst at Jefferies LLC, said in a research note before the results were released. Oracle’s “approach to transitioning its business to cloud has taken a more healthy form.”

    4:00p
    Supercharge Your Existing Storage with a Metadata Engine

    David Flynn is Chief Technology Officer and Co-founder of Primary Data.

    New storage technologies like NVMe flash and cloud storage are helping enterprises keep up with explosive data growth and new ways to use old data, including business analytics and other intelligence applications. The trouble is that despite the diverse capabilities of storage systems across performance, price and protection, the rigidity of traditional storage and compute architectures means there has been no way to make sure the right resource is serving the right data at the right time. Each new type of storage becomes a silo that traps data and increases data center costs and complexity.

    To overcome this inefficiency, storage resources need to be intelligently connected to automate the placement and movement of data, maximizing performance while saving significantly on costs. This can be done now by adding a metadata engine to your architecture that abstracts data from the underlying storage hardware to virtualize it, and then unifies storage resources and capabilities into a single global data space. Data can then be moved automatically to the storage best suited to its requirements, without application interruption. Let’s examine how this can improve application service levels, increase storage utilization to slow storage sprawl, and reduce overprovisioning to cut costs.
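
    To make the idea concrete, below is a minimal sketch of objective-based placement in a global data space. The tier names, latency figures, costs, and the place() function are invented for illustration; they are not any vendor's actual catalog or API.

        # Hypothetical sketch: objective-based placement in a global data space.
        # Tier names, numbers, and the API are illustrative assumptions only.
        from dataclasses import dataclass

        @dataclass
        class StorageTier:
            name: str
            latency_ms: float        # typical read latency the tier can deliver
            cost_per_gb_month: float
            protection: str          # e.g. "replicated" or "erasure-coded"

        @dataclass
        class DataObjective:
            path: str
            required_latency_ms: float
            required_protection: str

        TIERS = [
            StorageTier("nvme-flash", 0.2, 0.25, "replicated"),
            StorageTier("hybrid-array", 5.0, 0.08, "replicated"),
            StorageTier("object-cloud", 50.0, 0.02, "erasure-coded"),
        ]

        def place(objective: DataObjective) -> StorageTier:
            """Pick the cheapest tier that still meets the data's objectives."""
            eligible = [t for t in TIERS
                        if t.latency_ms <= objective.required_latency_ms
                        and t.protection == objective.required_protection]
            if not eligible:
                raise ValueError(f"no tier satisfies objectives for {objective.path}")
            return min(eligible, key=lambda t: t.cost_per_gb_month)

        print(place(DataObjective("/db/orders.ibd", 1.0, "replicated")).name)          # nvme-flash
        print(place(DataObjective("/archive/2015.tar", 100.0, "erasure-coded")).name)  # object-cloud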

    Improve Service Levels by Using the Right Storage for the Right Job

    Traditionally, due to the cost and complexity of data migration, most data stays where it is first written until it is retired. As a result, IT typically purchases storage based on the highest projected estimates for application service requirements. While this approach enables IT to ensure it will meet Service Level Agreements (SLAs), it creates significant waste in the data center, which is straining IT budgets. Even with this excessive over-purchasing of storage, estimates can be wrong, and many admins have stories about nightmare data migration fire drills.

    Automating the placement and movement of data with a metadata engine improves service levels, as follows:

    • Data-aware, objective-based management. Traditionally, IT takes a bottom-up approach to meeting application service level objectives, assigning storage resources based on expected application needs. Intelligent data management enables IT to focus on data needs, aligning data with the capabilities a specific storage device can provide. A metadata engine can automatically provision data to the ideal storage for its performance and protection requirements, and move data dynamically and non-disruptively if the environment changes, ensuring that service levels are always met. For example, if a noisy neighbor starts consuming storage resources, workloads can be rebalanced without manual intervention and without disrupting application access to data.
    • Global visibility into client file and storage performance. An enterprise metadata engine can gather telemetry on clients, enabling IT to see which applications are generating the workload and the performance each client receives. It can also gather performance information across all storage resources integrated into the global dataspace. This enables smart, real-time decisions about where data should be placed to meet SLOs. In addition, workloads can be charted historically, helping IT implement more effective policies and proactively move data to the right storage as needed (see the sketch after this list). For example, software might identify that financial data sees higher activity at the end of the quarter, proactively move the associated data to faster storage, and then move it back to more cost-effective storage once quarterly reporting is completed.
    • Simpler, faster architecture. Once data is freed from individual storage containers, it becomes possible to move control (management) operations to dedicated management servers, accelerating management operations while freeing the data path from the congestion of metadata activity. In addition, managing data through a metadata engine enables applications to access data in parallel across multiple storage systems.
    • Easily integrate new storage technologies. To help enterprises overcome vendor lock-in and save on costs, any solution to automate the placement and movement of data should ideally be vendor and protocol agnostic. This makes it possible for companies to easily integrate NVMe in servers, cloud object storage accessed via protocols such as Swift and S3, and the next advance in storage without the need to rip, replace, and upgrade storage.
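
    As promised above, here is a rough sketch of the kind of telemetry-driven, proactive policy described in the second bullet. The paths, thresholds, and quarter-end window are assumptions made up for the example.

        # Hypothetical tiering policy: promote reporting data ahead of quarterly
        # close, otherwise tier by recent activity. All names/thresholds invented.
        import datetime

        QUARTER_END_MONTHS = {3, 6, 9, 12}

        def in_quarter_end_window(today: datetime.date, days: int = 10) -> bool:
            """True during the last `days` days of a calendar quarter."""
            if today.month not in QUARTER_END_MONTHS:
                return False
            next_month = today.replace(day=28) + datetime.timedelta(days=4)
            last_day = (next_month - datetime.timedelta(days=next_month.day)).day
            return last_day - today.day < days

        def tier_for(path: str, iops_last_week: float, today: datetime.date) -> str:
            if path.startswith("/finance/") and in_quarter_end_window(today):
                return "nvme-flash"   # pre-stage data for quarterly reporting
            return "nvme-flash" if iops_last_week > 500 else "object-cloud"

        print(tier_for("/finance/gl.db", 20, datetime.date(2017, 3, 28)))  # nvme-flash
        print(tier_for("/finance/gl.db", 20, datetime.date(2017, 4, 12)))  # object-cloud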

    Increase Storage Utilization to Reduce Storage Sprawl and Costs

    To avoid the cost and complexity of data migrations, IT over-provisions storage for every application in the data center. This leads to expensive waste. Automating the placement and movement of data across storage resources with a metadata engine enables enterprises to greatly increase storage utilization, significantly slowing storage sprawl and reducing costs, as follows:

    • Global visibility into resource consumption. With a metadata engine that virtualizes data, admins can see available performance and capacity, in aggregate and by individual storage system. This ensures they know exactly when they need to purchase more storage, and what type to purchase to meet data’s needs (a minimal illustration follows this list).
    • Automatically place data on the ideal storage. A metadata engine can automatically move infrequently accessed data to more cost-effective storage, including the cloud. By reclaiming capacity on more expensive storage resources, this significantly extends the life of existing investments and greatly slows storage sprawl.
    • Scale out any storage type, as needed. A metadata engine that is vendor and protocol agnostic views storage in the global namespace as resource pools with specific attributes across performance, price and protection. Workloads can then be automatically provisioned and rebalanced as new resources are added. This makes adding performance and capacity of any storage type a simple deployment process instead of a complex, planned migration or upgrade. As a result, enterprises can defer new storage purchases until they are really needed, and purchase exactly the storage they need. This has the additional benefit of making it possible to easily introduce the latest storage technologies into the data center for business agility.
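
    The following toy calculation illustrates the first bullet: aggregating utilization across pools in a global namespace and flagging a tier that is approaching a purchase decision. The pool figures and the 80 percent threshold are made up for the example.

        # Hypothetical capacity roll-up across storage pools in a global namespace.
        pools = [
            {"tier": "nvme-flash",   "capacity_tb": 100,  "used_tb": 92},
            {"tier": "nvme-flash",   "capacity_tb": 100,  "used_tb": 78},
            {"tier": "object-cloud", "capacity_tb": 2000, "used_tb": 600},
        ]

        PURCHASE_THRESHOLD = 0.80  # flag a tier once aggregate utilization passes 80%

        totals = {}
        for p in pools:
            t = totals.setdefault(p["tier"], {"capacity_tb": 0, "used_tb": 0})
            t["capacity_tb"] += p["capacity_tb"]
            t["used_tb"] += p["used_tb"]

        for tier, agg in totals.items():
            util = agg["used_tb"] / agg["capacity_tb"]
            flag = "consider buying more" if util > PURCHASE_THRESHOLD else "ok"
            print(f"{tier}: {util:.0%} used ({flag})")
        # nvme-flash: 85% used (consider buying more); object-cloud: 30% used (ok)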

    Integrating a metadata engine into enterprise architectures improves service levels with a data-aware approach to management, smarter and automated data placement decisions, and simpler, faster architectures. It also enables enterprises to use storage much more efficiently to reduce storage hardware, software, and maintenance costs. With these advanced capabilities, enterprises are able to cut over-provisioning costs by up to 50 percent, and those savings easily run into the millions when you are managing petabytes of data.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:57p
    AWS Offers Cloud Credits to Alexa Skill Developers

    Brought to You by Talkin’ Cloud

    Amazon Web Services is using its cloud dominance to encourage developers to build Alexa skills, launching a new program on Wednesday that offers cloud credits to developers with a published Alexa skill. Alexa Skills can be thought of as “virtual apps” that help users extend the power of the virtual assistant.

    AWS said that many Alexa skill developers use its free tier, which offers a limited amount of Amazon EC2 compute power and AWS Lambda requests for no charge. But if developers go over these limits, they will incur cloud charges.
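
    For context, a published custom skill is typically backed by an AWS Lambda function that receives Alexa’s JSON requests and returns a JSON response. The sketch below shows roughly what such a handler looks like; the intent name is invented and the request/response shapes are abbreviated, so treat it as an illustration rather than a complete skill.

        # Minimal Lambda-backed Alexa skill handler (abbreviated, illustrative).
        def lambda_handler(event, context):
            request = event.get("request", {})
            if (request.get("type") == "IntentRequest"
                    and request.get("intent", {}).get("name") == "DataCenterFactIntent"):
                text = "Atlanta is one of the fastest-growing data center markets."
            else:
                text = "Welcome. Ask me for a data center fact."
            return {
                "version": "1.0",
                "response": {
                    "outputSpeech": {"type": "PlainText", "text": text},
                    "shouldEndSession": True,
                },
            }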

    With its new offering, developers with a published Alexa skill can apply to receive a $100 AWS promotional credit as well as an additional $100 per month in credits if they incur AWS usage charges for their skill.

    In November at its partner conference, AWS’ director of its worldwide partner ecosystem Terry Wise said that AWS had heard a “deep desire to integrate Alexa and voice capabilities” into services offered through the partner network, according to ZDNet. To that end, AWS announced the Alexa Service Delivery Program for Partners, which gives “companies access to tools, training, solution blueprints, and support for the assistant, which is best known for helping customers using Amazon’s Echo line of connected speakers,” according to PC World.

    As of January, there were more than 7,000 custom Alexa skills built by third-party developers. There are all kinds of things Alexa lets you do – including those Alexa skills that enhance your productivity, locate your missing keys, and make a cup of coffee. [Check out more ways to use Alexa on our sister site, Supersite for Windows.]

    “There is already a large community of incredibly engaged developers building skills for Alexa,” Steve Rabuchin, Vice President, Amazon Alexa said in a statement. “Today, we’re excited to announce a new program that will free up developers to create more robust and unique skills that can take advantage of AWS services. We can’t wait to see what developers create for Alexa.”

    Alexa developers can apply to see if they qualify for the AWS cloud credits online. The first promotional credits will be sent out in April.

    If an Alexa developer’s skill surpasses the free usage tier, the developer may be eligible to continue receiving promotional credits.

    “Each month you have an AWS usage charge you’ll receive another $100 AWS promotional credit to be used toward your skill during the following month,” AWS says.

    This article originally appeared on Talkin’ Cloud.

    7:37p
    Google Opens a New Front in Cloud Price Wars

    Over the past three years Google spent roughly $10 billion a year on data centers so you don’t have to.

    By constantly growing the scale of their data center infrastructure, cloud giants are able to keep lowering the prices of their services as they battle for share of the enterprise cloud market. At the Google Cloud Next conference earlier this month in San Francisco, Google launched the latest attack in this battle, one that at least for now has a good shot at being successful.

    In addition to the modest five-to-eight-percent price drop on cloud VMs (although at large enough scale it stops being modest), the company announced new so-called Committed Use Discounts, which can slash the cost of its cloud infrastructure services by up to 55 percent in exchange for a one-to-three-year commitment by the customer.

    Importantly, these new discounts for long-term users mean Google’s cloud may now be cheaper for those users than Amazon Web Services, the leader in the space, whose compute capacity accounts for 90 percent of the world’s processor cores running virtual machines for customers, also known as Infrastructure-as-a-Service.

    That’s according to Michael Warrilow, VP at the market research firm Gartner. Most of the remaining 10 percent of compute cores are in Microsoft’s cloud data centers, Warrilow said, with Google splitting whatever’s left with a few other players. “But it’s not going to be hard for [Google] to turn up,” he added. “The demand is there.”

    And the new discounts are likely to help Google turn up, unless of course AWS and Microsoft Azure promptly respond with new price cuts and discounts of their own, which is very likely, considering the history of cloud price wars.

    See also: Can Google Lure More Enterprises Inside Its Data Centers?

    Elasticity vs Cost: the Cloud’s Big Trade-Off

    Amazon has had similar discounts for long-term upfront commitments for some time now. In fact, that was the only way an AWS user could get competitive pricing from the world’s largest cloud provider, Warrilow said. The user has to pay for three years upfront, and the agreement will specify the type of cloud VMs they will use for those three years. That’s where Google decided to make its move.

    The Alphabet subsidiary is not asking for an upfront payment, and it’s not asking you to specify what kind of cloud VMs you’ll be using for those three years. You buy compute cores and memory in bulk, Urs Hölzle, Google’s senior VP of technical infrastructure, said from the stage at Next. And you get a big discount for buying in bulk and signing a one-year or three-year contract. How you use those bulk resources during that time is up to you. You can change machine size at any moment, depending on your needs. “You’re only committing to the aggregate volume and not the details,” Hölzle said.
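
    A conceptual sketch of what committing to aggregate volume rather than specific machine types might look like follows; this is not Google’s actual billing logic, and the numbers are invented.

        # Usage from differently shaped VMs counts against one aggregate vCPU
        # commitment, rather than a reservation for a specific machine type.
        committed_vcpus = 64
        hours_in_month = 720

        # (vcpus, hours run this month) for whatever machine shapes were used
        vm_usage = [(16, 300), (8, 720), (32, 200), (2, 720)]

        vcpu_hours_used = sum(vcpus * hours for vcpus, hours in vm_usage)
        committed_vcpu_hours = committed_vcpus * hours_in_month

        covered = min(vcpu_hours_used, committed_vcpu_hours)      # discounted rate
        overage = max(0, vcpu_hours_used - committed_vcpu_hours)  # billed on demand

        print(f"{vcpu_hours_used} vCPU-hours used: "
              f"{covered} covered by the commitment, {overage} on demand")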

    Urs Holzle, Senior Vice President for Technical Infrastructure at Google, speaks during the Google I/O 2014 conference in San Francisco. (Photo by Stephen Lam/Getty Images)

    AWS added some configuration flexibility to its Reserved Instances last September, letting users switch the family of cloud VMs they use but only if the VMs they switch to have the same price. This category is called Convertible RIs, but that cost-equivalency requirement takes away from convertibility.

    Models like Amazon’s (he didn’t name Amazon, but the reference was clear) “force you to predict the future perfectly,” Hölzle said. “Like on-premise, we’re forced to buy servers in fixed sizes.” That approach diminishes the cloud’s promise of infrastructure elasticity in exchange for lower cost.

    According to Warrilow, the move will be a good one for Google; separating the ability to get a discount on cloud usage from how specifically you use it “is a competitive differentiator for them,” he said.

    Analysis Shows Google Can Win on Price

    Elasticity aside, the size of the discounts Google is offering for these bulk cloud purchases may put Google ahead of Amazon on cost alone. The discount for a one-year commitment is 37 percent, and the discount for a three-year commitment is 55 percent.
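
    As a back-of-envelope illustration of those percentages, the snippet below applies them to an assumed on-demand rate; the $0.20-per-hour figure is a placeholder, not an actual Google Cloud price.

        # Rough effect of the quoted committed-use discounts on a hypothetical VM.
        on_demand_per_hour = 0.20   # assumed rate, $/hour
        hours_per_month = 730

        for label, discount in [("on demand", 0.0),
                                ("1-year commitment", 0.37),
                                ("3-year commitment", 0.55)]:
            monthly = on_demand_per_hour * hours_per_month * (1 - discount)
            print(f"{label:>18}: ${monthly:,.2f}/month")
        # on demand: $146.00, 1-year: $91.98, 3-year: $65.70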

    RightScale, a cloud management company that does a lot of research on the way companies use cloud services, decided to measure what that means in terms of actual cost and how it compares to the cost of AWS. Using the cloud usage history of a real-world company (which it did not name), RightScale found that Google’s cloud would be 28 percent cheaper for that company than AWS Reserved Instances with a one-year commitment and 35 percent cheaper than AWS Convertible RIs with a three-year commitment:

    Chart: RightScale

    Google’s “committed-use discounts will be well-received,” Warrilow said. While the company still has a long way to go to catch up to its competitors in terms of making itself more “enterprise-friendly,” it’s taking all the right steps, such as expanding its partner program and beefing up its database of large customer use cases, he said. “There’s still good money to be made in all this, even with the price battles that have gone on.”

    See also: Google Expands Cloud Data Center Plans, Asserts Hardware, Connectivity Leadership

    8:45p
    Google Data Center FAQ

    Google is the largest, most-used search engine in the world, with a global market share that has held steady at about 90 percent since Google Search launched in 1997 as Backrub. In 2017, Google became the most valuable brand in the world, topping Apple, according to the Brand Finance Global 500 report. Google’s position is due mainly to its core business as a search engine and its ability to transform users into payers via advertising.

    About 32 percent of Google visitors come from the US, where the company holds 63.9 percent of the search engine market, according to statista.com. Google had 247 million unique US users in November 2015. Globally, it boasts 1.5 billion search engine users and more than 1 billion users of Gmail.

    Google data centers process an average of 40,000 searches per second, which works out to 3.5 billion searches per day and 1.2 trillion searches per year, Internet Live Stats reports. That’s up from 795.2 million searches per year in 1999, one year after Google was launched.
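
    A quick arithmetic check of those figures (each line of output rounds to the numbers quoted above):

        # Sanity check of the search-volume figures cited by Internet Live Stats.
        per_second = 40_000
        per_day = per_second * 60 * 60 * 24
        per_year = per_day * 365
        print(f"{per_day:,} searches per day")    # 3,456,000,000 (~3.5 billion)
        print(f"{per_year:,} searches per year")  # 1,261,440,000,000 (~1.2 trillion)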


    In a reorganization in October 2015, Google became a subsidiary of a new company it created called Alphabet. Since then, several projects have been canceled or scaled back, including the halt of further rollout of Google Fiber. Following the reorg, however, Google has placed a lot of focus (and dedicated a lot of resources) to selling cloud services to enterprises, going head-to-head against the market giant Amazon Web Services and the second-largest player in the space, Microsoft Azure.

    That has meant a major expansion of Google data centers specifically to support those cloud services. At the Google Cloud Next conference in San Francisco in March 2017, the company’s execs revealed that it spent nearly $30 billion on data centers over the preceding three years. While the company already has what is probably the world’s largest cloud, it was not built to support enterprise cloud services. To do that, the company needs to have data centers in more locations, and that’s what it has been doing, adding new locations to support cloud services and adding cloud data center capacity wherever it makes sense in existing locations.

    The largest of several murals illustrator Fuchsia MacAree painted on the walls of Google’s data center in Dublin (Photo: Google)

    Here are some of the most frequently asked questions about Google data centers and our best stab at answering them:

    Where are Google Data Centers Located?

    Google lists eight data center locations in the U.S., one in South America, four in Europe and two in Asia. Its cloud sites, however, are expanding, and Google’s cloud map shows many points of presence worldwide. The company also has many caching sites in colocation facilities throughout the world, whose locations it does not share.

    This far-flung network is necessary not only to support operations that run 24/7, but also to meet specific regulations (like the EU’s privacy regulations) of certain regions and to ensure business continuity in the face of risks like natural disasters.

    Google’s data center in The Dalles, Oregon, 2006 (Photo by Craig Mitchelldyer/Getty Images)

    In the works as of March 2017 are Google data centers for cloud services in California, Canada, the Netherlands, Northern Virginia, São Paulo, London, Finland, Frankfurt, Mumbai, Singapore, and Sydney.

    Here are the data center sites listed by Google:

    North America:

    South America:

    Asia:

    Europe:

    How Big are Google Data Centers?

    A paper presented during the IEEE 802.3bs Task Force in May 2014 estimates the size of five of Google’s US facilities as:

    • Pryor Creek (Mayes County), Oklahoma, 980,000 square feet
    • Lenoir, North Carolina, 337,000 square feet
    • The Dalles, Oregon, 200,000 square feet (before the 164,000-square-foot expansion in 2016)
    • Council Bluffs, Iowa, 200,000 square feet
    • Berkeley County, South Carolina, 200,000 square feet

    Many of these sites have multiple data center buildings, as Google prefers to build additional structures as sites expand rather than containing operations in a single massive building.

    Google itself doesn’t disclose the size of its data centers. Instead, it mentions the cost of the sites or number of employees. Sometimes, facility size slips out. For example, the announcement about the opening of The Dalles in Oregon said the initial building was 164,000 square feet. The size of subsequent expansions, however, has been kept tightly under wraps.

    Reports discussing Google’s new data center in Eemshaven, Netherlands, which opened in December 2016, didn’t mention size. Instead, they said the company had contracted for the entire 62-megawatt output of a nearby wind farm and had run 9,941 miles of computer cable within the facility. The data center employs 150 people.

    How Many Servers Does Google Have?

    There’s no official data on how many servers there are in Google data centers, but Gartner estimated in a July 2016 report that Google at the time had 2.5 million servers. This number, of course, is always changing as the company expands capacity and refreshes its hardware.

     
