Data Center Knowledge | News and analysis for the data center industry

Monday, December 1st, 2014

    1:00p
    In Americas, Equinix Focused on Brazil and New Jersey Markets

    As it builds out its footprint in New Jersey and Brazil, Equinix is expanding its Business Suites product (an offering closer to wholesale) and positioning another new product, Performance Hub, to capture enterprise demand. Karl Strohmeyer, the company’s president for the Americas, gave us an update on its strategy in the region.

    Equinix has held a majority stake in ALOG since 2011 and acquired the company outright in July. ALOG gave Equinix a foothold in Brazil in the form of four data centers, two in São Paulo and two in Rio de Janeiro, the two largest markets in the country. The company just kicked off Phase Two of the RJ2 data center in Rio de Janeiro.

    RJ2 will be the first Tier III Certified data center in the city of Rio de Janeiro, as well as one of the biggest data centers there at over 160,000 square feet. A rainwater harvesting system is one unique touch, expected to reduce water consumption by 70 percent.

    Strohmeyer is optimistic about the South American market. “It’s in a unique position – Brazil’s an interesting dichotomy, because it’s closed, but for communications, it’s one of the fastest growing regions in the world,” he said. “It’s also one of the largest, if not fastest growing. We expect that to continue.”

    The data center will be one of the first Equinix facilities to have dedicated space for containers, which will allow customers to ship their infrastructure and drop it into an Equinix data center. Strohmeyer noted that the company is seeing an increase in cross-border business, with containers one potential avenue for global companies looking to establish a presence in Brazil.

    Equinix data centers all share a certain aesthetic, and this is true of the four Brazilian data centers that came with the ALOG acquisition. “It’s been three and a half years, the company was funded and built within that time frame,” said Strohmeyer. “It looks like a purpose-built Equinix building. It’s important that when a customer deploys anywhere in the world that they get the same experience and SLAs.”

    Business Suites Coming to New Jersey

    The Business Suites are a departure from the traditional Equinix retail colocation model. Each business suite is a dedicated room with keycard access. The product is currently available only in DC10 in Ashburn, Virginia, but Equinix is constructing its NY6 facility in Secaucus in a similar fashion.

    Business Suites are a way for a customer with a larger footprint requirement to tap the interconnection benefits of Equinix in more of a wholesale-type setting. Most of the inventory in DC10 was sold out as of last May.

    Ashburn and New York are obvious choices for Business Suites, as they’re the two largest data center markets in the country. Although the product has performed well, there are currently no plans for a wider rollout.

    Performance Hub Leads Enterprise Push

    The company has historically done well with web-scale companies and has recently focused on capturing enterprise business. A few initiatives are driving the enterprise push.

    “We have positioned Platform Equinix as a key part of [the enterprise’s] IT infrastructure and it’s really taken off,” said Strohmeyer. The cornerstone of this effort is Performance Hub.

    Performance Hub combines elements of data centers, networking infrastructure, and connectivity with cloud computing access to improve application performance.

    “It’s a bit of a Trojan horse – they start with three-four racks across the globe and a lot of interconnection. Once they see the power of a third-party data center, they then start growing their deployments.”

    Cloud Has Been Good for Equinix

    Cloud providers have proven to be desirable customers, and cloud usage in general has been a boon for data center providers. Equinix recently noted that cloud connectivity is its fastest growing segment, and Strohmeyer reiterated cloud’s place in the Equinix strategy. By providing easy connectivity, Equinix has been marketing toward hybrid and multi-cloud usage to entice enterprises.

    Consider this stance versus what some pundits originally thought cloud would do to the data center industry. Jim Cramer twice advised his Mad Money viewers to get out of data center stocks because of the risk cloud posed, once in 2009 and again in 2011.

    Most recently, IDC said that cloud was one contributing factor to what it predicts will be a decline in the number of data centers. The biggest cause of the decline is not a slowing of the colocation industry but enterprises building fewer data centers of their own and outsourcing instead. This is positive for colocation providers that also offer cloud connectivity, and Equinix wants to capture that business.

    “We love what cloud is doing to the proverbial basement,” said Strohmeyer. “It’s forcing the CIO to think about how they move workloads away from their own data stores. It’s a catalyst for a bigger architectural discussion.”

    4:30p
    Eliminating Downtime: Six Key Considerations for Your Hosting Architecture

    Jeffrey Papen is CEO and founder of Peak Hosting, a managed hosting provider that has helped design, build, maintain and support some of the world’s largest Internet properties.

    There was a time when conventional wisdom held that network downtime was unavoidable, and while it could be minimized, it was next to impossible to eliminate. However, for companies that rely on their network being up 24/7 in order for their business to run, any downtime, no matter how minimal, is unacceptable.

    The good news is that eliminating downtime completely is possible. It starts with the physical infrastructure, but the software element is critical as well. By making the right hosting architecture choices up front, companies can ensure that their site will always be available with no interruptions to the end user experience. Even during scheduled maintenance the system will be up and running and available to customers.

    Eliminating downtime comes from properly architecting your system. We all know that uptime doesn’t come cheap, but saying, “It’s OK for X to fail because I can always provision more in my cloud” doesn’t make it true from your customer’s perspective: When their systems fail, they don’t become understanding, they become angry.

    Six Architecture Choices to Eliminate Downtime

    Design for a true 2N architecture. When designing a hosting environment, you should literally install two of everything. This means dual power supplies, dual hard drives, dual PDUs (Power Distribution Units), dual UPSs (Uninterruptible Power Supplies), dual generators, dual top-of-rack switches, dual NICs. The list goes on; just make sure there are two of whatever gets put in.

    Although the cloud hosting industry promises to replace a failed hard drive within an hour, it still only leverages a 1N architecture, meaning you will need to spend hours, or even days, getting your code, configuration, and data back to their pre-crash state. The impact is a double whammy: not only is your service down for an extended period, you also have to pull staff away from the core work that advances the business in order to redeploy your environment. Parts will inevitably fail, but a 2N architecture ensures that component failure doesn’t have to equal service failure.
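
    For a back-of-the-envelope sense of why duplication matters, here is a minimal sketch in Python, assuming independently failing components (which shared infrastructure only approximates): duplicating a component squares its unavailability.

        def service_availability(component_availability: float, copies: int) -> float:
            """Availability of a service that stays up as long as at least
            one of `copies` identical, independently failing components is up."""
            return 1.0 - (1.0 - component_availability) ** copies

        # Illustrative numbers only, not vendor figures: a component that is
        # up 99 percent of the time yields roughly 99.99 percent service
        # availability once fully duplicated.
        print(service_availability(0.99, 1))   # ~0.99   (1N)
        print(service_availability(0.99, 2))   # ~0.9999 (2N)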

    Leverage RAID. Carrying on with the theme of 2N and dual hard drives, you should implement some sort of RAID solution, with RAID 1 being the bare minimum. RAID 1 ensures that you have an exact mirrored copy of your hard drive ready to go immediately should a drive fail. Depending on your system’s performance and complexity, you should also consider RAID 6 and 10. (Note: RAID 5 is a bad idea; with only single parity, a second drive failure or an unrecoverable read error during a long rebuild means total data loss.)
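
    For a rough feel of the trade-offs, the sketch below captures the standard capacity and redundancy arithmetic for common RAID levels; it is an illustration only, and real arrays also differ in rebuild time, performance, and controller behavior.

        def raid_profile(level: str, disks: int, disk_tb: float):
            """Return (usable capacity in TB, drive failures always survivable)
            for common RAID levels."""
            if level == "1":     # mirrors: one disk of usable capacity
                return disk_tb, disks - 1
            if level == "5":     # single parity: loses one disk of capacity
                return (disks - 1) * disk_tb, 1
            if level == "6":     # double parity: survives any two failures
                return (disks - 2) * disk_tb, 2
            if level == "10":    # striped mirrors: half capacity; worst case
                return (disks // 2) * disk_tb, 1  # is two failures in one pair
            raise ValueError(f"unhandled RAID level: {level}")

        # Example: eight 4 TB drives in RAID 6 give 24 TB of usable space
        # and keep running through any two simultaneous drive failures.
        print(raid_profile("6", 8, 4.0))   # (24.0, 2)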

    Buy the best hardware. The old adage is true: you get what you pay for. If all your company cares about is buying the cheapest hardware, you’re going to have very high failure rates. Purchasing hardware from the best vendors with the best reputations will ensure that you’re starting with quality products.

    Burn in your infrastructure for 72 hours. No amount of 2N architecture can prevent a hardware failure caused by a faulty motherboard, CPU, or RAM DIMM. However, if you burn in your system for at least three full days before you put it into production, you can generally discover such hardware issues before they impact your service.
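
    A minimal sketch of the burn-in idea follows; production shops typically reach for dedicated tools such as memory testers and vendor diagnostics, but the shape of a stress-and-verify loop looks roughly like this.

        import hashlib
        import os
        import time

        def burn_in(hours: float = 72.0, chunk_mb: int = 64) -> None:
            """Stress CPU and memory until the deadline: hash a large random
            buffer twice and fail loudly if the two reads ever disagree."""
            deadline = time.time() + hours * 3600
            while time.time() < deadline:
                data = os.urandom(chunk_mb * 1024 * 1024)   # exercise RAM
                first = hashlib.sha256(data).digest()       # exercise the CPU
                second = hashlib.sha256(data).digest()      # re-read the buffer
                if first != second:
                    raise RuntimeError("corruption detected during burn-in")
            print("burn-in completed with no errors detected")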

    Design for ongoing maintenance. It’s not just component failure that you need to be prepared for; you also have to upgrade code and software. By designing in a 2N architecture throughout your entire environment, every moving part on a server can be maintained without taking the service offline.
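
    As a sketch of how 2N enables zero-downtime maintenance, consider the loop below; the drain, update, and restore callbacks are hypothetical placeholders for whatever your load balancer and deployment tooling actually provide.

        def rolling_maintenance(pairs, drain, update, restore):
            """Upgrade a 2N deployment one node at a time: while one twin is
            drained and updated, the other carries the full load, so the
            service as a whole never goes offline."""
            for side_a, side_b in pairs:
                for node in (side_a, side_b):
                    drain(node)     # shift traffic to the node's twin
                    update(node)    # apply firmware, OS, or code changes
                    restore(node)   # return the node to rotation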

    Be prepared for a catastrophe. While it’s incredibly rare, a catastrophic event could take an entire site out. The software you choose will need to support the ability to fail over to another facility. Make sure you have a backup facility in place.
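
    Here is a minimal sketch of facility failover detection, with hypothetical health endpoints at each site; a production setup would act through DNS or a global load balancer rather than a print statement, and would require quorum logic to avoid false positives.

        import time
        import urllib.request

        PRIMARY = "https://primary.example.com/health"   # hypothetical endpoints
        STANDBY = "https://standby.example.com/health"

        def healthy(url: str, timeout: float = 2.0) -> bool:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.status == 200
            except OSError:
                return False

        def monitor(strikes_to_fail: int = 3, interval_s: float = 10.0) -> None:
            """Fail over to the standby facility only after repeated failed
            checks, so a transient blip doesn't trigger a flap."""
            strikes = 0
            while True:
                strikes = 0 if healthy(PRIMARY) else strikes + 1
                if strikes >= strikes_to_fail:
                    print("primary facility unhealthy, failing over to", STANDBY)
                    break   # placeholder for a DNS or load-balancer update
                time.sleep(interval_s)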

    When it comes to hosting solutions, remember this – it doesn’t have to fail in the first place. Design accordingly and keep your service always up and your customers always happy.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:42p
    Seagate Moves Servers 220 Feet Underground

    Seagate, known primarily for its data storage products, has agreed to move servers it uses to provide cloud backup and disaster recovery services into an underground data center near Pittsburgh, Pennsylvania, operated by Iron Mountain.

    Iron Mountain’s National Underground Data Center is located 220 feet below ground in a former limestone mine in a town called Boyers, about 65 miles north of Pittsburgh. Iron Mountain has built secure storage facilities in the mine for movie studio reels and photo archives, as well as for document storage by the federal Office of Personnel Management.

    Over the past several years the company has built out data center space in the facility. In 2013 it announced it would become a data center provider and hired Compass Datacenters to build another server farm for it in the Boston suburbs.

    The Seagate deal is both a colocation deal and a reseller partnership. Iron Mountain will resell Seagate’s cloud backup and DR services hosted at its data center.

    In a statement, Jason Buffington, senior data protection analyst for the Enterprise Strategy Group, said, “Combining Seagate’s expertise as an early innovator in hybrid data protection, built on on-premises appliances and cloud repositories, with Iron Mountain’s solid reputation as a trusted protector of data and assets should be a welcome pairing for customers and partners looking for hybrid protection.”

    There are several data center facilities built underground in former mines and military bunkers in the U.S., Canada, Europe, and Asia. See a list of subterranean data centers compiled by Data Center Knowledge.

    8:06p
    Chinese Tech Giants Invest $300M in Data Center Provider 21Vianet

    Chinese data center company 21Vianet has received a big investment, totaling $296 million, from Chinese software developer Kingsoft Corp. and several affiliated investors.

    The investors include Kingsoft, mobile phone vendor Xiaomi, and Temasek, an investment company based in Singapore. There is a lot of crossover between the companies, but the bottom line is that this is a major investment in a Chinese data center player by several of the country’s major technology companies.

    The investment puts carrier-neutral 21Vianet in a good position to expand and capture some of the burgeoning Chinese cloud and data center market.

    21Vianet will build and maintain data centers in China for Kingsoft. The deal is estimated to cover at least 5,000 cabinets’ worth of space over the next three years.

    Kingsoft is buying a 20 percent stake in the service provider for $172 million and will hold about a fifth of the voting power at 21Vianet’s general meetings. The company is immediately paying $51.6 million to 21Vianet, with the rest to be settled by April 30 next year.

    Chinese billionaire Lei Jun has a 14.8 percent stake in Kingsoft and is also the CEO of major mobile phone vendor Xiaomi. Xiaomi will invest $50 million in 21Vianet and hold around a 3.5 percent interest with 10 percent voting power. Temasek is investing $74 million for about a 13 percent interest and 5.8 percent voting power.

    Chinese Cloud Market Attracts Global Players

    The data center market in China is up around 20 percent from last year, and several new builds have been announced, including by NTT and CenturyLink, the latter of which recently launched a data center in Shanghai. Technology vendors such as Oracle and CloudFlare are also planning their approach to the market.

    The major public cloud providers have been entering the Chinese cloud market as well. Microsoft’s Azure launched in China through 21Vianet. 21Vianet is also the local partner for IBM SmartCloud Enterprise+. IBM also partnered with Tencent, which owns a 12.6 percent stake in Kingsoft.

    Amazon Web Services is launching a Chinese region through ChinaNetCenter.

    There are several regulatory compliance hurdles to clear in order to operate within what many dub “The Great Firewall of China.” Officially called the Golden Shield Project, it is essentially an Internet surveillance and censorship system that makes it hard to serve Chinese customers from outside the country. Many outside players have gone the partnering route, and 21Vianet has been a beneficiary of that trend.

    “We welcome the new strategic investments by Kingsoft and Xiaomi and the additional investment by Temasek,” Josh Chen, co-founder, chairman and CEO of 21Vianet, said in a statement. “As we remain fully committed to the carrier-neutral, customer-neutral, and cloud-neutral value proposition, we are confident that their investments offer significant strategic value in strengthening our core operations and expanding new business opportunities.”

     

    9:00p
    Telecom Transparency Reports Could Reveal Sensitive Government Details: Canadian Official


    This article originally appeared at The WHIR

    A senior Canadian public safety official has warned against telecoms sharing details of their cooperation with law enforcement through transparency reports, noting that the practice could reveal “sensitive operational details.”

    According to a report on Monday by The Canadian Press, senior assistant deputy minister for national and cyber security Lynda Clairmont said in a classified memo that telecoms’ efforts to reveal more to their customers about police and intelligence requests would require “extensive consultations with all relevant stakeholders.”

    The memo was released under the Access to Information Act. Clairmont wrote the memo to deputy minister Francois Guimont in April, offering advice ahead of his meeting with representatives from Canadian telecom Telus Corp.

    Telus released its first transparency report in September. The report said that Telus received about 103,500 official requests for information in 2013. The majority of requests were in emergency situations, such as to verify the location of 911 callers, and the company received around 4,300 court-ordered requests.

    While transparency reports have become quite common in the US as customers demand more information about how their data is being accessed by the government, few Canadian telecoms have followed suit. Rogers and Telus have both released transparency reports, while Bell Canada hasn’t published one yet.

    Google began issuing transparency reports in 2009. Since its first report, demands other than FISA requests and National Security Letters have increased by more than 250 percent, while demands tied to international criminal investigations have increased by 15 percent.

    In the memo, Clairmont said that “transparency is key to giving Parliament and Canadians confidence in our ability to meet both these objectives [security and privacy], but [we] must continue to ensure that sensitive operational details remain protected.”

    She said officials should review whether details could be revealed through “mass aggregate reporting of data.”

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/telecom-transparency-reports-reveal-sensitive-government-details-canadian-official

    9:30p
    Microsoft Confirms Acquisition of Mobile Email App Acompli


    This article originally appeared at The WHIR

    After Microsoft accidentally published a blog post announcing the acquisition last week, the maker of mobile email application Acompli confirmed the news on Monday. The terms of the deal have not been disclosed.

    Acompli CEO Javier Soltero said in a blog post that over the next few months, the company will “be sharing more about the exciting product plans” it has as a part of Microsoft.

    “Your app and accounts will continue to work and the team will continue on our fast pace of improving and adding new functionality every couple of weeks,” Soltero said.

    Acompli was founded 18 months ago, and soon after its launch it started working with enterprise IT departments. It was around that time that it began talking to Microsoft about further integrating Office 365 into its product.

    “Those conversations led to today, where we have decided the opportunity to join forces in pursuit of a better, faster, more powerful email experience is something we can do better as one company,” he said.

    In a post on Microsoft’s blog, Rajesh Jha, corporate vice president of Outlook and Office 365, reiterated how the acquisition will help Microsoft improve the mobile email experience. He said: “In a world where more than half of email messages are first read on a mobile device, it’s essential to give people fantastic email experiences wherever they go.”

    “The Acompli team is passionate about this quest,” Jha said. “Their app provides innovative ways to focus on what’s important in your inbox, to schedule meetings, and work with attachments and files. Users love how it connects to all email services and provides a single place to manage email with a focus on getting things done.”

    Jha said that Microsoft will reveal more details over the coming months.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/microsoft-confirms-acquisition-mobile-email-app-acompli

    11:08p
    CoreOS Blasts Docker for “Broken Security,” Builds Own Container Engine

    The CEO of CoreOS, the startup behind an operating system built specifically for massive server clusters in web-scale data centers, said Docker had a “broken security model” and announced an alternative CoreOS-built container runtime called Rocket.

    “At CoreOS we have large, serious users running in enterprise environments,” CoreOS co-founder and chief executive Alex Polvi wrote in a blog post published Monday. “We cannot in good faith continue to support Docker’s broken security model without addressing these issues.”

    Polvi’s post is surprising in light of his company’s strong support for Docker throughout its short existence. As he pointed out himself, CoreOS co-founder and CTO Brandon Philips has been one of the top contributors to Docker and serves on the open source project’s governing board.

    Docker, both an open source project and a company, has been around since 2013. Its standard format for application containers quickly gained support from startups and major enterprises and service providers.

    Amazon Web Services announced a Docker container management service in November. The same month, Microsoft launched a Windows command line interface for Docker. Until then, users could only manage Docker containers on Linux.

    CoreOS published Polvi’s post three days before Docker kicks off DockerCon Europe 2014 in Amsterdam – its first conference outside of the U.S.

    Ben Golub, CEO of Docker the company, questioned the post’s timing. “While we disagree with some of the arguments and questionable rhetoric and timing of the Rocket announcement, we hope that we can all continue to be guided by what is best for users and developers,” he wrote in a response published on the Docker blog.

    Docker Accused of Changing Direction

    Polvi’s issue isn’t only with Docker’s security model but also with what he perceives as a change in direction from the project’s originally stated goals. Instead of focusing on containers as a simple standard component, Docker has been building a wide range of tools around containers, according to him.

    “Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server,” Polvi wrote.

    CoreOS founders were originally attracted to Docker because they liked the idea of a standard container, “a simple, composable unit that could be used in a variety of systems.”

    “This was a rallying cry for the industry and we quickly followed,” Polvi wrote. “Unfortunately, a simple re-usable component is not how things are playing out.”

    Docker CEO Responds

    Golub said Docker had been working on a comprehensive set of orchestration services because the company wanted to help users deploy multi-container applications distributed across multiple hosts. Docker builds all these tools to make sure these multi-container users have the same clean and open interface as single-container users do, he wrote in his response.

    Golub’s post was an initial response to the Rocket announcement; the Docker CEO promised to address Polvi’s technical arguments in a later post. He pointed out that arguments like this are a normal part of the open source process, and that everybody is welcome to use Docker containers however they want or to propose alternative standards.

    Different Container Standard Proposed

    The standard CoreOS is proposing is called App Container. It defines a specification of the facilities surrounding the container.

    Here is what’s important in the design of a container, according to CoreOS:

    • Composable. All tools for downloading, installing, and running containers should be well integrated, but independent and composable.
    • Security. Isolation should be pluggable, and the crypto primitives for strong trust, image auditing and application identity should exist from day one.
    • Image distribution. Discovery of container images should be simple and facilitate a federated namespace, and distributed retrieval. This opens the possibility of alternative protocols, such as BitTorrent, and deployments to private environments without the requirement of a registry.
    • Open. The format and runtime should be well-specified and developed by a community. We want independent implementations of tools to be able to run the same container consistently.

    Rocket is a command line tool that implements these facilities. App Container is an open spec and other systems can implement it without using Rocket.
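
    To make the trust and image distribution principles above concrete, here is a minimal sketch of verifying a downloaded image against a published digest before running it. This illustrates the general idea only; it is not the actual App Container mechanism, which specifies cryptographic signing rather than a bare hash check, and the names here are hypothetical.

        import hashlib

        def verify_image(image_bytes: bytes, published_sha256: str) -> None:
            """Refuse to run an image whose content does not match the
            digest its publisher advertised."""
            actual = hashlib.sha256(image_bytes).hexdigest()
            if actual != published_sha256:
                raise ValueError("image digest mismatch; refusing to run: " + actual)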

    CoreOS may contribute App Container support to Docker once it matures. The company also plans to continue to ensure its operating system supports Docker.

    Pivotal on Board With App Container

    Pivotal, the EMC spinoff that helps enterprises adopt agile software development processes, cloud, and big data, and sells them the infrastructure services necessary to do that, has expressed support for Rocket and App Container.

    Andrew Clay Shafer, a co-founder of Puppet Labs who recently joined Pivotal as director of technology, also penned a blog post Monday, saying the company has been collaborating with CoreOS on the standard. “When we saw the progress that CoreOS had made, and their openness to input and contribution, we decided that Pivotal needed to get involved in the App Container effort,” he wrote.

