Data Center Knowledge | News and analysis for the data center industry

Wednesday, May 10th, 2017

    2:00p
    Google Launches Its First Cloud Data Centers in AWS’s Virginia Backyard

    Google has brought online its first cloud data centers in Northern Virginia, the largest data center market in the US and one of the largest in the world, as well as the location of the biggest cloud availability region of Amazon Web Services, whose business Google Cloud Platform is going after with a vengeance (and $10 billion in annual data center spend).

    While AWS remains far ahead of its rivals in terms of market share, Microsoft, Google, IBM, Alibaba, and Oracle all reported faster cloud revenue growth in this year’s first quarter than did Amazon.

    Northern Virginia has seen the snowball effect of data centers’ clustering tendencies play out more than most other regions around the world. Hyper-scale cloud companies have all flocked to the region in recent years to take advantage of all the interconnection opportunities there, and data center developers have been racing to build in the region to satisfy the demand.

    See also: N. Virginia Landgrab Continues: Next Amazon Data Center Campus?

    Google launching cloud data centers in this big internet nerve center means customer applications serving the Northeastern and Mid-Atlantic regions will see big latency improvements if hosted there, GCP product manager Dave Stiver wrote in a blog post announcing the launch Wednesday:

    “Our performance testing shows 25%–85% reductions in RTT latency when serving customers in Washington DC, New York, Boston, Montreal, and Toronto compared to using our Iowa or South Carolina regions.”
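    The numbers in that quote come from Google’s own testing, but a rough sanity check is easy to run from your own network: measure TCP connect time to whatever regional endpoints your applications actually use. The sketch below is a minimal example in Python; the hostnames are placeholders, not official Google endpoints, and should be swapped for services you actually run.

    ```python
    import socket
    import time

    # Placeholder endpoints -- substitute the real hostnames your workloads use.
    ENDPOINTS = {
        "us-east4 (N. Virginia)": ("example-us-east4.example.com", 443),
        "us-central1 (Iowa)": ("example-us-central1.example.com", 443),
    }

    def tcp_rtt(host, port, samples=5):
        """Return the median TCP connect time in milliseconds."""
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                pass
            times.append((time.perf_counter() - start) * 1000)
        times.sort()
        return times[len(times) // 2]

    for name, (host, port) in ENDPOINTS.items():
        try:
            print(f"{name}: {tcp_rtt(host, port):.1f} ms")
        except OSError as err:
            print(f"{name}: unreachable ({err})")
    ```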

    The Alphabet subsidiary has been spending billions on new cloud data centers to support its enterprise business as it races to bulk up the list of physical infrastructure locations it can offer customers. Urs Hölzle, the company’s senior VP for technical infrastructure, said in March that the company had been spending about $10 billion in capital annually on this infrastructure push.

    See also: Cloud Giants Disagree on Future of Corporate Data Centers

    Google launched three availability zones in Northern Virginia, which means three data centers, each with its own physical infrastructure (although not necessarily in three separate buildings). Google usually launches multiple data centers within a region, each hosting a separate availability zone for intra-regional redundancy. Some regions launch with two zones, but the goal is to eventually have at least three per region.
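    Multiple zones only pay off if an application actually spreads its instances across them. The sketch below shows the basic idea with a simple round-robin placement; the us-east4-a/b/c zone names are an assumption used for illustration, and a real deployment would rely on the provider’s own tooling rather than a hand-rolled scheduler.

    ```python
    from itertools import cycle

    # Assumed zone names for the Northern Virginia region; confirm against
    # the provider's documentation before relying on them.
    ZONES = ["us-east4-a", "us-east4-b", "us-east4-c"]

    def spread_across_zones(instance_names, zones=ZONES):
        """Assign instances to zones in round-robin order so the loss of any
        single zone takes down at most roughly 1/len(zones) of capacity."""
        zone_cycle = cycle(zones)
        return {name: next(zone_cycle) for name in instance_names}

    placements = spread_across_zones([f"web-{i}" for i in range(6)])
    for instance, zone in sorted(placements.items()):
        print(instance, "->", zone)
    ```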

    Northern Virginia is Google’s fourth availability region in the US and seventh worldwide. The company has announced plans to launch cloud data centers in Montreal, California, and São Paulo, as well as four European regions, and three in Asia Pacific.

    Find everything you wanted to know about Google data centers in our Google Data Center FAQ.

    3:00p
    Crews Break Ground on Fourth Facebook Data Center in Iowa
    Facebook announced that it has broken ground on the fourth building of its data center campus in Altoona, Iowa. Once the new building is complete, total square footage on the 400-acre site will reach 2.5 million.


    According to a company blog post, this is the largest of nine Facebook data center construction sites, with the third structure still being built. Like the other buildings on the campus, the newest one will be powered by renewable energy from the Wellsburg wind farm, built and maintained by MidAmerican Energy, which added 138 MW of capacity to the grid at the onset of the project in 2013.


    The company has a goal to power 50 percent of its data center operations with renewable energy by 2018. The Facebook data center in Altoona is the company’s first major foray into renewable energy (although it has a 100-kilowatt solar array at its Prineville, Oregon, data center).


    The company finished and opened the first phase in Altoona (a 476,000-square-foot, $300 million building) in late 2014. The second building spans 468,000 square feet, and the third is 496,000 square feet, not counting a cold storage expansion: a smaller, stripped-down data center the company uses to store rarely accessed user photos and videos.


    Cold storage uses a relatively low amount of power and requires a lot of floor space. Facebook’s Prineville and North Carolina locations both have cold storage in addition to their full-size facilities. According to Facebook, “The [cold storage] data centers are equipped with less than one-sixth of the power available to our traditional data centers, and, when fully loaded, can support up to one exabyte (1,000 PB) per data hall.”


    The Menlo Park, California-based social media giant isn’t just adding square footage in Altoona; it expects to keep up to 800 construction workers busy on a daily basis until 2020 and add more full-time employees to the current 200. So far, teams have put 2.75 million hours of work into the project, with plenty more to come over the next three years, according to Facebook.
    3:30p
    Massive, Six-Story Data Center in a Norwegian Mine Comes Online

    A data center built in an abandoned mine in Norway will officially open on Wednesday, according to Lefdal, builder of what it claims is “the largest green data center in Europe.” Its first two tenants are IBM and the German industrial conglomerate Friedhelm LOH Group.

    The Lefdal Mine Datacenter, built between the west Norwegian ports of Måløy and Nordfjordeid, offers 1.3 million square feet of space, much of it provided in containers designed, built, and shipped by Rittal. A fully customized container can be fitted out, shipped to the mine, hooked up, and brought online in six to eight weeks, Lefdal promises. There will also be three floors of traditional data center space, where conventional racks and cabinets can be installed.

    Drawing all of its power from locally produced hydroelectric and wind sources, the data center can scale up to 200MW once fully populated, larger than either of Facebook’s two 120MW facilities in northern Sweden or Apple’s planned data center in Denmark, estimated to be between 100MW and 144MW.

    See also: Facebook Data Center Coming to Denmark

    Situated on a deep fjord, much like the neighboring Green Mountain facility near Stavanger, the facility will cool its servers with seawater. Saltwater drawn from the depths of the fjord at about 45 degrees Fahrenheit cools a less corrosive freshwater loop. The fjord water remains pressurized and requires little energy to pump, eliminating the need for costly high-capacity pumping equipment.

    As a result, the facility will operate with a PUE between 1.08 and 1.1 for a 5 kW rack, according to LocalHost, designer of the unique cooling system.
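    PUE (power usage effectiveness) is simply total facility power divided by the power delivered to IT equipment, so the quoted range implies very little cooling and distribution overhead per rack. A quick back-of-the-envelope check, assuming the figures apply directly to a single 5 kW rack:

    ```python
    def facility_power(it_load_kw, pue):
        """Total facility power implied by a given PUE and IT load."""
        return it_load_kw * pue

    it_load_kw = 5.0  # the 5 kW rack cited by LocalHost
    for pue in (1.08, 1.10):
        total = facility_power(it_load_kw, pue)
        overhead = total - it_load_kw
        print(f"PUE {pue:.2f}: {total:.2f} kW total, {overhead:.2f} kW overhead")

    # A PUE of 1.08-1.10 means only about 0.4-0.5 kW of cooling and power
    # distribution overhead for every 5 kW of IT load.
    ```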

    Mats Andersson, CEO for the project, described the massive data center:

    “It consists of six levels (with the potential to expand to 14), sprawling over some 1.3 million square feet of ‘white space’ that can be used for [data] storage, connected by a paved road that descends in a spiral through tunnels 45 feet wide and almost 30 feet high. Just one of those levels could host all the servers in Norway.”

    An illustration published by Lefdal shows the layout and the scale of its enormous underground facility.

    Lefdal:

    “The flexibility is unique. Not only will one be able to ‘plug and play’ capacity needs, but the large space and the logistics also allows different product solutions in a cost effective way.”

    Lefdal says it has already signed two international companies as its first tenants: IBM and the German industrial conglomerate Friedhelm LOH Group.

    Brian Farr, director of resiliency services for Europe at IBM Global Technology Services, wrote in a Forbes article:

    “For any data center construction, there are two common limiting factors: physical real estate and power capacity. This is one location where those two limiting factors disappear. It is hard to describe the scale of this facility. On the third level alone, where we are starting the buildout, we can house what would be thousands of virtual data centers that can scale flexibly, providing the perfect location to deliver hybrid IT, private and public cloud services for enterprises across Europe.”

    4:00p
    The Data Center of the Future and Cloud Disaster Recovery

    The data center of the future is a constantly evolving concept. If you go back to World War II, the ideal was to have a massive mainframe in a large room fed by punched cards. A few decades later, distributed computing promoted an Indiana Jones-like warehouse with endless racks of servers, each hosting one application. Virtualization upset that apple cart by enabling massive consolidation and greatly reducing the number of physical servers inside the data center.

    Now it appears we are entering a minimalist period: data center spaces remain, but they have been so stripped down that all that is left is a few desktops in the middle of an otherwise empty room. Like a magic trick by David Copperfield, the Lamborghini under the curtain has disappeared in a puff of smoke. But instead of showing up at the back of the room, the compute hardware has been transported to the cloud. And just as in a magic trick, IT operations managers are applauding loudly.

    “We moved backup and disaster recovery (DR) to the cloud and now intend to move even more functions to the cloud,” said Erick Panger, director of information systems at TruePosition, a company that provides location intelligence solutions. “It looks like we are heading to a place where few real data centers will exist in most companies with everything being hosted in the cloud.”

    If that’s the overall direction, what does this mean in terms of disaster recovery? How will future file restoration function? How should data best be looked after? And how should the data center manager be preparing for these events?

    Hardware Be Gone

    Disaster recovery and its big cousin, business continuity (BC), used to be a cumbersome duo. The data center manager was tasked with erecting a duplicate IT site containing all the storage, servers, networking, and software of the core data center. This behemoth stood idle as a standby site in case the primary site went down. Alternatively, the company purchased space at a colocation facility to host the standby gear for times of need.

    Over time, the inefficiency of this setup became apparent, and the concept of mirrored data centers emerged: two or more active data centers acting as failover sites for each other.

    An “active-passive” DR model means maintaining a disaster recovery site that is used for test and development when not in disaster mode. An “active-active” model, on the other hand, splits and load-balances a workload across both sites. Mirroring trimmed down the amount of hardware, but it remained costly in terms of Capex and management.
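    The routing logic behind the two models is simple enough to sketch. The following is a stripped-down illustration, not any vendor’s actual failover product: active-passive sends everything to the primary until a health check fails, while active-active balances requests across every healthy site.

    ```python
    import random

    def healthy(site):
        """Placeholder health check; a real one would probe the site."""
        return site.get("up", True)

    def route_active_passive(request, primary, standby):
        """Send everything to the primary; fail over only when it is down."""
        return primary if healthy(primary) else standby

    def route_active_active(request, sites):
        """Load-balance across all healthy sites (random pick here; real
        systems weight by capacity, latency, or session affinity)."""
        candidates = [s for s in sites if healthy(s)]
        return random.choice(candidates)

    primary = {"name": "dc-east", "up": True}
    standby = {"name": "dc-west", "up": True}
    print(route_active_passive("req-1", primary, standby)["name"])
    print(route_active_active("req-2", [primary, standby])["name"])
    ```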

    What we see happening now is a focus on consolidating resources and lowering Capex, said Robert Amatruda, product marketing manager for data protection at Dell Software. That’s why so many companies are linking up with an outsourced data center and leveraging the efficiencies that cloud models offer, not least the ability to have a true cloud disaster recovery and business continuity framework that is beyond the resources of many in-house data centers.

    “The data center of the future is all about efficiency and scale, having pointers to content indexes so you can resurrect your data, and having a myriad of options to failover to both inside and outside the data center,” said Amatruda. “Especially as cloud becomes more prevalent, the notion of companies having infrastructure that they own and are financially responsible for is becoming increasingly obsolete.”

    Some, of course, take it to extremes. Media giant Condé Nast, for example, pulled the plug on its 67,000-square-foot data center a few years ago and sold it, preferring to use only cloud services. The rationale: to focus on its core function of content creation and the IT resources needed for that. The company handed the IT load to Amazon Web Services (AWS). Over a three-month period, it migrated more than 500 servers, one petabyte of storage, more than 100 networking devices, and over 100 databases to AWS. Because it was already well along in virtualization, the transition went quickly. The result: performance of core content-related IT functions rose by almost 40 percent, while operating costs fell by about 40 percent.

    But not everyone is ready to push everything to the cloud – yet. For one thing, there is already a lot of investment in on-premises gear. Amatruda said many solutions are now designed and built specifically to act as a bridge between legacy architecture and a hybrid architecture that is part cloud, part data center. That means being able to manage data both on-premises and off, and being able to deliver functionality like content indexing to provide resiliency.

    “Instead of ensuring that data is recoverable, more organizations are concerned with having an always-on architecture, whereby resiliency is built directly into the architecture itself,” he said. “You’re seeing more products deal with cloud connection capabilities so that users can manage data outside the walls of their physical data center.”

    Blurred Lines

    A blurring of the lines, then, appears to be happening between physical and virtual. With tools that make it largely irrelevant whether data sits in a rack in the next room or in some nebulous cloud-based data center, the big point to grasp is that the future of BC/DR is moving away from the traditional concepts of a primary site and a recovery site. Instead, it is shifting toward the ability to seamlessly migrate or burst workloads from site to site, not only for resiliency but also for peak demand, cost, or customer proximity, said Rachel Dines, senior product marketing manager for SteelStore at NetApp.

    “These sites could be customer owned, at a private cloud, hosted or a colo, but the key is that data must be able to dynamically shift between them, on demand, while maintaining always-on availability — another term for this is a data fabric,” she said.

    This means incorporating more cloud infrastructure into the architecture, since such services make a lot of sense for backup and disaster recovery workloads. They tend to be inexpensive and provide greater data protection while letting organizations get more comfortable with the cloud, helping them take the next step in maturing their resiliency practices. It also opens the door to data reduction techniques like deduplication, compression, differential snapshots, and efficient replication.

    “These technologies can reduce storage footprints by up to 30 times when used in backup and DR environments,” says Dines.
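    Deduplication is the most straightforward of those techniques to illustrate: split the backup stream into chunks, identify each chunk by a content hash, and store each unique chunk only once. The sketch below uses fixed-size chunks and SHA-256 purely for illustration; real products use content-defined chunking, and the ratio you actually get depends entirely on how repetitive the data is, with the 30x figure quoted above at the high end.

    ```python
    import hashlib

    def dedupe(data, chunk_size=4096):
        """Split data into fixed-size chunks and keep one copy per unique chunk.
        Returns the chunk store and the ordered list of chunk hashes (the
        'recipe') needed to reconstruct the original stream."""
        store, recipe = {}, []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            recipe.append(digest)
        return store, recipe

    # Highly repetitive data, as backup streams often are, dedupes dramatically.
    data = (b"block-A " * 512 + b"block-B " * 512) * 50
    store, recipe = dedupe(data)
    stored = sum(len(c) for c in store.values())
    print(f"original {len(data)} bytes, stored {stored} bytes, "
          f"ratio {len(data) / stored:.0f}x")
    ```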

    Consequently, the data center of the future will hold significantly less redundant data, which makes it more scalable, whether it is in-house or in the cloud.

    “The cloud is creating a level of scalability that didn’t exist in the data centers of the past,” says Amatruda.

    Step Back

    With all these new DR/BC concepts being thrown at the data center manager, what is the best course forward? Greg Schulz, an analyst with Server and StorageIO Group, says it is time to start using both new and old things in new ways, stepping back from a focus on the tools and technologies themselves to how they can protect and preserve information.

    “Revisit why things are being protected, when, where, with what, and for how long and review if those are meeting the actual needs of the business,” said Schulz. “Align the right tool, technology and technique to the problem instead of racing to a technology or trend then looking for a problem to solve.”

    Drew Robb is a freelance writer based in Florida.

    6:39p
    French Websites Knocked Offline in Cyber-Attack on Cedexis

    Carol Matlack (Bloomberg) — The websites of several major French media outlets were knocked offline Wednesday during a cyber-attack against Cedexis, a Paris-based provider of network and cloud technology to corporate customers.

    The newspapers Le Monde and Le Figaro were among those that reported their sites were briefly shut down by the attack, which occurred during the afternoon in Paris.

    Three of five networks that Cedexis operates to manage web traffic for clients were partly disabled by a “significant DDoS,” or distributed denial-of-service attack, Julien Coulon, the company’s co-founder, said in an interview. In a DDoS attack, a network is deliberately overwhelmed by a barrage of data. Coulon said the company hadn’t yet identified the source of the attack. He said services to clients were fully restored by early evening.
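    One common building block of DDoS mitigation is per-source rate tracking: count requests from each client over a sliding window and flag or drop sources that exceed a threshold. The sketch below illustrates that generic idea only; it says nothing about how Cedexis actually detects or absorbs attacks.

    ```python
    import time
    from collections import defaultdict, deque

    class RateTracker:
        """Flag clients that exceed `limit` requests within `window` seconds."""
        def __init__(self, limit=100, window=10.0):
            self.limit = limit
            self.window = window
            self.hits = defaultdict(deque)

        def allow(self, client_ip, now=None):
            now = time.monotonic() if now is None else now
            q = self.hits[client_ip]
            q.append(now)
            # Drop timestamps that have fallen out of the sliding window.
            while q and now - q[0] > self.window:
                q.popleft()
            return len(q) <= self.limit

    tracker = RateTracker(limit=100, window=10.0)
    for i in range(150):
        allowed = tracker.allow("203.0.113.7", now=i * 0.01)  # 150 requests in 1.5s
    print("last request allowed?", allowed)  # False: this source is over the limit
    ```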

    See also: Cloud Giants Likely to Beef Up Bandwidth to Fight IoT Botnets

    Cedexis, which was founded in France in 2009 and has U.S. headquarters in Portland, lists on its website customers including major French companies such as Airbus SE, Air France-KLM, and Total SA, as well as U.S. companies such as Microsoft Corp. and publicly supported broadcaster PBS. Bloomberg LP, the parent of Bloomberg News, also is a Cedexis customer.

    7:08p
    Microsoft Unveils New Cloud Services for AI and Industrial Sensors

    Dina Bass (Bloomberg) — Microsoft Corp. is showing off new cloud services for AI and industrial sensors as well as database software tools designed to give Oracle Corp. a headache.

    One artificial intelligence service uses the company’s ability to automatically translate languages to add subtitles to PowerPoint presentations, while another lets customers index video to identify a particular speaker by sight or tag when a word or phrase is uttered, company executives said. The indexer can be used both to find specific things in hours of footage and to better match ads to clips. Microsoft’s collection of AI services for customers now numbers 29.

    At the Build developer conference in Seattle, Chief Executive Officer Satya Nadella highlighted Azure cloud services for the Internet of Things, in which multiple sensors and smaller computing devices track data that can be analyzed by Microsoft’s cloud and AI tools. Where the company’s previous focus has been on transferring that information back to its data centers to analyze, Azure IoT Edge will allow that computing to take place on-site in local computing devices to speed things up. The focus is initially on industrial applications, and Nadella demonstrated how this approach allows faster responses to things like malfunctioning equipment at Sweden’s Sandvik Coromant, a maker of metal-cutting tools.

    “You can’t rendezvous all your data in the cloud,” Nadella said. “You want to be able to write logic that reacts to these events.”
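    The pattern Nadella describes can be sketched generically: logic running on the edge device reacts to each reading immediately and only forwards alerts or batched telemetry upstream. The example below illustrates that pattern only; it is not the actual Azure IoT Edge programming model, and the machine name and temperature threshold are invented for the example.

    ```python
    SPINDLE_TEMP_LIMIT_C = 85.0  # assumed threshold for a hypothetical machine

    def handle_reading(reading, alert_sink, telemetry_buffer):
        """Run locally on the edge device: react immediately to anomalies,
        batch routine telemetry for occasional upload to the cloud."""
        if reading["spindle_temp_c"] > SPINDLE_TEMP_LIMIT_C:
            # React on-site within milliseconds instead of waiting on a
            # cloud round trip.
            alert_sink(f"overheat on {reading['machine_id']}: "
                       f"{reading['spindle_temp_c']:.1f} C")
        telemetry_buffer.append(reading)

    alerts, buffer = [], []
    for temp in (70.2, 79.5, 91.3):
        handle_reading({"machine_id": "cutter-7", "spindle_temp_c": temp},
                       alerts.append, buffer)
    print(alerts)       # one local alert for the 91.3 C reading
    print(len(buffer))  # all three readings buffered for later upload
    ```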

    See also: Deep Learning Driving Up Data Center Power Density

    He showcased a future scenario called AI for Workplace Safety, in which a construction site can tag pieces of equipment with properties like how they should be placed and used, as well as who can use them. Using AI software that can recognize what’s going on from video and sensors, the system can trigger alerts if a machine isn’t operated correctly or if there’s a spill.

    Nadella has made AI and internet-based computing key areas of investment for Microsoft as it looks for new sources of growth. In both spaces, Redmond, Washington-based Microsoft is squaring up to rivals like Amazon.com Inc. – Amazon Web Services is previewing a product to make IoT devices smarter too – as well as Alphabet Inc.’s Google. At the same time, it’s trying to grab business from older competitors such as Oracle, even as the database giant makes its own foray into the cloud.

    See also: Top AWS Engineer Calls Hurd’s Cloud Data Center Bluff

    Microsoft is providing an early look at new tools intended to help customers switch from Oracle’s database to Microsoft’s rival product in the cloud, Azure SQL Database as a Service. The tools let users of Microsoft’s standard SQL product switch to the cloud version.

    The company also unveiled a cloud database it regards as a significant technological leap by providing what it terms a “planet-scale” product. Called Azure Cosmos DB, it can provide a single database instance spanning multiple countries so that information is always up to date anywhere. Retailer Jet.com is among the customers already using it, Microsoft cloud chief Scott Guthrie said. Databases are another front in the battle for cloud customers between Microsoft and Amazon.

    See also: Google Launches Bare-Metal Cloud GPUs for Machine Learning

    While Microsoft is getting into newer technological spheres, its traditional businesses remain robust. Windows 10 is now in use on 500 million monthly active devices – the company had once been aiming for 1 billion in fiscal 2018 but its retreat from the phone market forced it to scale back volume ambitions. Office 365, the cloud versions of its popular workplace apps, has 100 million commercial users active each month.

    Its Cortana voice-controlled search service has more than 140 million monthly active users. Microsoft said it signed agreements with Intel Corp. to create reference designs for companies that want to make Cortana devices and with HP Inc. to make devices – though no specific products were announced. Harman Kardon has said it will make a competitor to Amazon’s Echo device that uses Cortana for sale later this year.

    9:22p
    Data Centers Need Better SLAs

    William D’Alessio is SVP of Enterprise Operations at Maintech.

    Service Level Agreements (SLAs) need to cover all aspects of a business and its subsidiaries, which means they are often broad and generic and can leave your data center unprotected. An SLA with an Original Equipment Manufacturer (OEM) is meant to ensure timely repairs and service. What often happens with a typical SLA, however, is that the provider waits until the last minute of the quoted time frame to repair your systems, causing your business costly downtime. That is not a breach of contract, but it can be frustrating for businesses that need to keep equipment in use full-time.

    An enhanced support SLA can help avoid these pitfalls. Enhanced SLAs can supplement your existing warranty, offer flexibility and cost savings, and extend the life of your equipment.

    If your business has had problems in the past with an SLA, then it’s time to consider an enhanced support SLA.

    Better Warranty Options

    Data center equipment usually comes with a basic OEM warranty. This warranty covers defects in the product but is not very flexible when it comes to repairs or replacements. A common problem arises when equipment comes from multiple manufacturers: an SLA from one manufacturer covers only that company’s equipment and ignores the other equipment affected by the problem, leaving companies to schedule site visits from multiple technicians. An enhanced support SLA can supplement each basic warranty, so your company gets a faster response that covers all affected equipment within a specific time frame, which means less downtime.

    Customized Support

    A basic OEM support agreement often proves too standardized. As companies expand the technologies in their data centers, this type of agreement stops working. An enhanced support SLA, by contrast, can be tailored with your third-party maintenance provider so your data center receives exactly the kind of support it needs. It also helps maintain multi-vendor setups for better integration among equipment.

    Avoids Costly Downtime

    Another common issue occurs when a data center chooses a basic OEM warranty thinking it can upgrade later. Though this saves money in the short term, upgrading a warranty package later is extremely costly. In the case of replacement parts, many basic warranties offer next-day delivery, but that can mean 24 hours of downtime, which is very expensive for most data centers. An enhanced support SLA could instead deliver parts within a few hours, which is far more cost-effective than losing an entire business day waiting for a part.
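    The arithmetic behind that claim is straightforward. Using an assumed, purely illustrative downtime cost of $10,000 per hour (substitute your own figure), the gap between next-day parts and a four-hour parts commitment looks like this:

    ```python
    def downtime_cost(hours_waiting_for_parts, cost_per_hour):
        return hours_waiting_for_parts * cost_per_hour

    COST_PER_HOUR = 10_000  # illustrative assumption; use your own downtime cost

    for label, hours in (("next-day parts (basic warranty)", 24),
                         ("4-hour parts (enhanced SLA)", 4)):
        print(f"{label}: ${downtime_cost(hours, COST_PER_HOUR):,}")

    # The difference, $200,000 per incident in this example, is the budget
    # headroom available for paying an enhanced-SLA premium.
    ```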

    Longer Equipment Warranty

    Basic OEM warranties typically cover three years of the equipment’s lifespan; after that, the cost of repairing any broken or damaged equipment comes out of your company’s pocket. Many businesses want more than three years of coverage, since properly maintained equipment can last much longer. An enhanced support SLA can extend the warranty period, allowing businesses to get more out of their equipment and avoid costly repairs and replacements.

    An enhanced support SLA can help businesses save time and money by keeping equipment working better and longer. Without this type of support, IT staff can spend too much time figuring out how to repair or replace equipment, instead of focusing on more important tasks.

     

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    10:14p
    Evocative Appoints New CEO, Acquires Bay Area Data Centers

    West Coast data center provider Evocative has expanded its San Francisco Bay Area presence, acquiring two facilities from 365 Data Centers, one in Emeryville and the other in San Jose.

    The deal took place before 365 was acquired by a pair of private investors last month. Terms of the deal were not disclosed.

    The company also announced the appointment of a new CEO: Arman Khalili, a long-time internet infrastructure entrepreneur, took Evocative’s helm in February, the company said Wednesday.

    See also: Investors Take Over 365 Data Centers, What’s Next?

    The two former 365 facilities bring Evocative’s portfolio to five locations, which also include San Francisco, Los Angeles, and Las Vegas, adding a total of 40,000 square feet of data center space.

    This is the second time the Emeryville facility has come under Evocative’s ownership. The company sold it to 365 Main (a past incarnation of 365) in 2013.

    Khalili’s most recent venture prior to joining Evocative was running CentralColo, a data center provider he founded. He’s also worked as CEO of Black Lotus, a DDoS mitigation company acquired by Level 3, and founded UnitedLayer, a San Francisco data center provider.

    In a statement, Khalili said he plans to continue expanding Evocative, through both construction and acquisition. So far, that strategy appears to be focused on the West Coast, where the company is planning to open more data centers within the next year.

