Data Center Knowledge | News and analysis for the data center industry
 

Friday, November 14th, 2014

    1:00p
    Netcraft: DigitalOcean Now Third-Largest Cloud

    DigitalOcean is a latecomer to the cloud provider scene that initially popped up on people’s radars because of its phenomenal growth. That growth has continued unabated, and DigitalOcean is now the third-largest cloud in the world by count of web-facing servers, according to Netcraft, an English Internet services company.

    The growth chart doesn’t look like a hockey stick. Instead, it’s a perilous incline that would terrify the most seasoned of rock climbers. The number of DigitalOcean’s web-facing servers has grown 400 percent in the last year, reaching 116,000 and surpassing Rackspace. To be fair, Rackspace has a very different business model, but the comparison is telling, given how much younger DigitalOcean is as a company. It had fewer than 140 servers in December 2012.

    DigitalOcean raised $37.2 million in an Andreessen Horowitz-led Series A funding round in March.

    “We were certainly a late entrant,” said Mitch Wainer, DigitalOcean co-founder and chief marketing officer. “We felt the cloud hosting industry was broken and saw it as an opportunity to disrupt.”

    It’s all about the singular, razor-sharp focus on developers, according to Wainer. “If you look at all the different providers and how they deliver products, it’s very clunky,” he said. “We prioritized user experience on day one and focused on catering, empowering, enabling the developer. It’s always been our mantra.”

    The company simplified developing an app in the cloud and made it easy to deploy online, something its founders felt was missing from the other cloud providers. “If you’re a dev, there was no easy solution to deploy online,” said Wainer. “You’d have to go through an enormous amount of steps and wait to deploy it.”

    DigitalOcean blurs the lines between Infrastructure-as-a-Service and Platform-as-a-Service by offering one-click install applications. Developers like the SSDs, low-priced bandwidth, and fast deployment. “Droplets” can be provisioned in under a minute in five markets: New York, Amsterdam, San Francisco, Singapore, and London.
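
    For developers, that provisioning boils down to a single API call. As a rough illustration only (the token, droplet name, and image and size slugs below are placeholders, and the exact parameters accepted by DigitalOcean’s v2 API may differ), creating a droplet looks something like this:

    ```python
    # Sketch of provisioning a DigitalOcean droplet via the v2 REST API.
    # The API token, droplet name, and image/size slugs are placeholders;
    # consult DigitalOcean's API documentation for the values it accepts.
    import requests

    API_TOKEN = "your-api-token-here"  # placeholder
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    }

    droplet = {
        "name": "example-droplet",    # hypothetical name
        "region": "nyc3",             # New York; Amsterdam, SF, Singapore, London also offered
        "size": "512mb",              # example size slug
        "image": "ubuntu-14-04-x64",  # example image slug
    }

    resp = requests.post(
        "https://api.digitalocean.com/v2/droplets",
        headers=headers,
        json=droplet,
        timeout=30,
    )
    resp.raise_for_status()
    print("Droplet queued:", resp.json()["droplet"]["id"])
    ```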

    The company just introduced native CoreOS support and recently partnered with Docker to release the latest version. It also partnered with Mesosphere to automate cluster provisioning at scale. “Containerization, Docker, CoreOS are all huge right now,” said Wainer. “It’s the new and better way to sort of abstract DevOps work. To organize and structure your large-scale infrastructure environments.”

    While Google, Amazon, and Azure have all intensified their developer focus and services, Wainer doesn’t see it as a threat. “They see the traction, believe in it, and obviously buy in,” he said. “What we’ve been able to do is be really smart with messaging and marketing. We clearly identify that this is for developers. Even AWS can say they’re catering to the developer all they want, but they’re doing a lot of things that spread their attention. Our position in the market is top-of-mind for developers. Even if it’s on a personal project level.”

    In October, Netcraft found 116,000 web-facing computers at DigitalOcean, 7,577 more than the previous month. It had discovered 88,000 in July. And those are just web-facing servers.

    A sizable share of the sites Netcraft picked up belonged to net new customers, more than 68,000 in a month. The statistics also reveal migrations from big-name hosts, showing which providers the rest are coming from.

    DigitalOcean opened its Singapore region in February, and it is already second only to Amazon in Netcraft’s count for that market. Its newest region, London, added in July, is growing nicely as well.

    Wainer said the company has successfully expanded both talent and infrastructure, and DigitalOcean is hiring aggressively right now.

    “We all enjoy operating the business like a startup,” said Wainer. “Agile, scrappy – to iterate faster, deploy and launch products faster. To continue to disrupt and think differently. When that goes away, that’s when things get boring. We’re going to keep this train moving.”

    4:30p
    Friday Funny Caption Contest: Pies

    The temperatures are dropping fast and the flurries are on their way. Let’s add a little warmth to this Friday afternoon with a brand new Kip and Gary!

    Diane Alber, the Arizona artist who created Kip and Gary, has a new cartoon for Data Center Knowledge’s cartoon caption contest. We challenge you to submit a humorous and clever caption that fits the comedic situation. Please add your entry in the comments below. Then, next week, our readers will vote for the best submission.

    Here’s what Diane had to say about this week’s cartoon: “I think Kip found a good hiding spot for the pumpkin pies this year!”

    Congratulations to the last cartoon winner, Ben, who won with, “We should get Linus on the phone and tell him we found the Great Pumpkin!”

    For more cartoons on DCK, see our Humor Channel. For more of Diane’s work, visit Kip and Gary’s website.

    7:00p
    Facebook Launches Iowa Data Center With Entirely New Network Architecture

    Facebook announced the launch of its newest massive data center in Altoona, Iowa, adding a third U.S. site to the list of company-owned data centers and a fourth globally.

    The Altoona facility is the first in Facebook’s fleet to feature a building-wide network fabric, an entirely new approach to intra-data center networking devised by the company’s infrastructure engineers.

    The social network is moving away from the approach of arranging servers into multiple massive compute clusters within a building and interconnecting them with each other. Altoona has a single network fabric whose scalability is limited only by the building’s physical size and power capacity.

    Inter-Cluster Connectivity Became a Bottleneck

    Alexey Andreyev, network engineer at Facebook, said the new architecture addresses bandwidth limitations in connecting the massive several-hundred-rack clusters the company has been deploying thus far. A huge amount of traffic takes place within each cluster, but the bandwidth available for one cluster to communicate with another is constrained by the inter-cluster switches, even though they are already high-bandwidth, high-density devices. That means cluster size was limited by the capacity of those inter-cluster switches.

    By deploying smaller clusters (or “pods,” as Facebook engineers call them) and using a flat network architecture, where every pod can talk to every other pod, the need for high-density switch chassis goes away. “We don’t have to use huge port density on these switches,” Andreyev said.

    It’s easier to develop lower-density high-speed boxes than high-density and high-speed boxes, he explained.

    Each pod includes four devices Facebook calls “fabric switches” and 48 top-of-rack switches, every one of which is connected to every fabric switch via a 40G uplink. Servers in a rack are connected to the TOR switch via 10G links, and every rack has 160G of total uplink bandwidth to the fabric.
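
    The arithmetic behind that 160G figure, plus a rough oversubscription estimate, is sketched below. Note that the per-rack server count is an assumption for illustration; the article specifies only link speeds and switch counts.

    ```python
    # Back-of-envelope bandwidth math for the fabric described above.
    # The per-rack server count is an assumption, not a figure from Facebook.
    fabric_switches_per_pod = 4
    tor_uplink_gbps = 40  # one 40G uplink from each TOR switch to each fabric switch

    rack_uplink_gbps = fabric_switches_per_pod * tor_uplink_gbps
    print(f"Uplink per rack: {rack_uplink_gbps}G")  # 4 x 40G = 160G

    server_link_gbps = 10
    assumed_servers_per_rack = 30  # assumption for illustration only
    rack_downlink_gbps = assumed_servers_per_rack * server_link_gbps
    print(f"Oversubscription at {assumed_servers_per_rack} servers/rack: "
          f"{rack_downlink_gbps / rack_uplink_gbps:.2f}:1")  # ~1.88:1
    ```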

    Here’s a graphic representation of the architecture, courtesy of Facebook:

    [Image: Facebook network fabric]

    The system is fully automated, and engineers never have to manually configure an individual device. If a device fails, it gets replaced and automatically configured by software. The same goes for capacity expansion. The system configures any device that gets added automatically.

    Using Simple OEM Switches

    The fabric does not use the home-baked network switches Facebook has been talking about this year. Jay Parikh, the company’s vice president of infrastructure engineering, announced the top-of-rack switch and Facebook’s own Linux-based operating system for it in June.

    The new fabric relies on gear available from the regular hardware suppliers, Najam Ahmad, vice president of network engineering at Facebook, said. The architecture is designed, however, to use the most basic functionality in switches available on the market, which means the company has many more supplier options than it has had in the older facilities that rely on those high-octane chassis for inter-cluster connectivity. “Individual platforms are relatively simple and available in multiple forms or multiple sources,” Ahmad said.

    New Architecture Will Apply Everywhere

    All data centers Facebook is going to build from now on will use the new network architecture, Andreyev said. Existing facilities will transition to it within their natural hardware refresh cycles.

    The company has built data centers in Prineville, Oregon, Forest City, North Carolina, and Luleå, Sweden. It also leases data center space from wholesale providers in California and Northern Virginia, but has been moving out of those facilities and subleasing the space until its long-term lease agreements expire.

    In April, Facebook said it had started the planning process for a second Altoona data center, before the first one was even finished, indicating a rapidly growing user base.

    The company has invested in a 138 megawatt wind farm in Iowa that will generate electricity for the electrical grid to offset energy consumption of its data center there.

    7:44p
    365 Data Centers Opens Nashville Technology Hub

    365 Data Centers launched a Nashville technology hub for startups and upgraded its downtown data center. A ribbon-cutting ceremony was held this week, featuring the city’s mayor, Karl Dean.

    The hub will give free colocation space, power, and Internet services to select local startups and businesses. The company’s 17,000-square-foot data center has upgraded power, cooling, and uninterruptible power supplies. It has maintained 100 percent uptime over the past 10 years, said CEO John Scanlon. The Nashville technology hub is located at 147 Fourth Ave. N.

    The company’s strategy is to serve small and mid-size businesses in second-tier markets. The colocation provider recently entered the cloud storage business to further entice SMBs.

    In Nashville, it is taking the initiative by partnering with the community to boost the local economy. 365 is an investor in Partnership 2020, the Nashville Area Chamber’s economic development initiative responsible for driving local business. In conjunction with P2020, it will provide some organizations with free and reduced-cost data center services.

    “365 has proven to be an exceptional community member and partner for the chamber and our P2020 program,” Judith Hill, vice president of business retention and expansion for the Chamber of Commerce, said. “Data centers and technology are well documented drivers of job and business growth, and 365’s commitment to providing state-of-the-art colocation facilities and cloud services in downtown Nashville helps us tremendously to achieve our goals.”

    The company is also partnering with other local civic and business leaders, including Matt Wiltshire, director of the Mayor’s Office of Economic and Community Development, and Bryan Huddleston, CEO of the Nashville Technology Council.

    The carrier-neutral facility includes 17,000 square feet of developed colocation space, redundant power from Nashville Electric Service, three 225 kVA UPS systems, 1.25 megawatts of generator power, and 280 tons of cooling capacity.

    Most of the data centers in the area are located outside of downtown. Compass Data Centers recently commissioned a facility in nearby Franklin, built using modular architecture, with Windstream’s hosting business as the customer.

    Peak 10 entered the market in 2006 through the acquisition of RenTech and has added a few facilities since. Carter Validus Mission Critical REIT performed a sale-leaseback of an AT&T facility in Brentwood last year. zColo has a data center located downtown.

    A flood hit parts of Nashville during a Data Center World conference in 2010.

    8:16p
    Dept. of Energy Awards $300M Deal for IBM Supercomputers

    IBM has won a $300 million supercomputing contract with the U.S. Department of Energy. Lawrence Livermore National Lab’s “Sierra” and Oak Ridge National Lab’s “Summit” supercomputers will leverage IBM’s OpenPOWER processor architecture and are being hailed as a step toward exascale computing.

    Peak performance of the IBM supercomputers will be in excess of 100 petaflops, balanced with more than 5 petabytes of dynamic and flash memory. The systems will be capable of moving data at more than 17 petabytes a second. That’s equivalent to moving over 100 billion photos on Facebook in a second. The systems will go live in 2017.

    Summit will perform five to 10 times better than Titan, the current supercomputer at Oak Ridge, and will also be five times more energy efficient, using roughly 10 percent more power than the current system.
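
    Those two claims are roughly consistent with each other, as the back-of-envelope check below shows. The figures are taken from the quoted specs above; the arithmetic is ours.

    ```python
    # Back-of-envelope check of the quoted figures (not official specs).
    # Claim 1: 5x-10x Titan's performance at roughly 1.1x the power.
    # Claim 2: aggregate data movement of 17 PB/s, likened to 100 billion photos/s.
    power_ratio = 1.10  # "roughly 10 percent more power"
    for speedup in (5, 10):
        print(f"{speedup}x performance at {power_ratio}x power -> "
              f"{speedup / power_ratio:.1f}x performance per watt")

    PB = 10**15                 # bytes in a petabyte (decimal)
    bandwidth = 17 * PB         # 17 petabytes per second
    photos = 100e9              # "over 100 billion photos"
    print(f"Implied average photo size: {bandwidth / photos / 1e3:.0f} KB")
    ```

    At five to 10 times the performance for about 1.1 times the power, the improvement works out to roughly 4.5x to 9x performance per watt, in line with the “five times more energy efficient” claim, and 17 PB/s spread across 100 billion photos implies an average photo size of about 170 KB.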

    A “Data Centric” Approach

    A consortium of IBM, NVIDIA, and Mellanox is developing the Summit machine architecture for next-generation supercomputing. The architecture will enable a smaller number of nodes with a larger memory footprint, optimized for parallel codes. The contract involves three of the five founding members of the OpenPOWER Foundation, and the labs will leverage this architecture.

    “In data centric computing, the value is not tied to only petaflops, but speed of insights,” said Tom Rosamilia, senior vice president, IBM Systems and Technology Group. The goal is to limit data movement within the latest IBM supercomputers.

    The current model requires data to move back and forth repeatedly between storage and processor to drive insights, and a design emphasis solely on microprocessors becomes progressively untenable. For this reason IBM has been pioneering the “data centric” approach, which embeds compute power everywhere data resides in the system.

    Oak Ridge’s Summit will be used to work with the nuclear energy industry to further optimize the reactor fleet and to perform climate modeling.

    “Systems like Summit allow us to inject greater amounts and variety of data in new ways we’ve not been doing with Titan,” said Jeffrey Nichols, associate laboratory director of computing and computational sciences at the lab. “These are early steps towards exascale. We believe we have a good path going forward.”

    New System to Monitor Nuclear Stockpile

    Lawrence Livermore’s machine will be called Sierra. The lab runs some of the most complicated calculations on the planet, with codes running easily over a million lines. Key national security decisions are based on these calculations, including assessment of all stockpile systems and life extension of weapons.

    “Simulation is the integrating element in our program that makes it possible for the country to not return to nuclear testing in Nevada,” said Mike McCoy, head of advanced simulation and computing program at Lawrence Livermore National Laboratory.

    “How do we assure they do the job for the country?” asked McCoy. “We are not buying off the shelf. We engage in long-term relationships…We share the risk in the development.”

    IBM Research will work with Lawrence Livermore and Oak Ridge on scientific collaboration centered on these systems and help develop tools and technologies to optimize codes to achieve the best performance on the acquired systems.

    NVIDIA GPUs to Supercharge the System

    NVIDIA brings three technologies to the effort. The first is its upcoming GPU architecture, called Volta. Volta incorporates the other two: an interconnect technology called NVLink and very high-bandwidth stacked memory.

    Sumit Gupta, general manager of Tesla accelerated computing at NVIDIA, said the combination can achieve roughly 40 teraflops per node, or roughly 10 times more than a server today.

    “In three years, we’ll create a server with 10 times performance,” Gupta said. “Summit will be five times as powerful as Titan but at a fifth the size.”

    NVLink is an interconnect for GPUs, allowing point-to-point connections between GPUs or between a GPU and a POWER CPU. Co-developed with IBM, it increases data flow. It will first be introduced into products in 2016.

    Mellanox is providing the inter-host communication network, network management, and InfiniBand. “We’re enhancing network capabilities to enable extreme-scale computer systems,” said Richard Graham, senior solutions architect at Mellanox. “Our goal is to reduce the network’s overall power consumption by 50 percent, and by 80 percent in the longer term.”

    9:00p
    Only 5% of Americans Unaware of Government Surveillance Programs in Post-Snowden Era


    This article originally appeared at The WHIR

    More than 90 percent of Americans feel that they as consumers have lost control over how their personal information is collected and used by companies and the government. According to a study released by Pew Internet on Wednesday, American citizens feel very insecure sharing information via social media, text messages, email, or even using a landline, in the post-Snowden era.

    The study is the first in a series that examines Americans’ perceptions of privacy online following the revelations about US government surveillance that came to light last year. The data is based on a survey conducted last January among a sample of 607 adults in the US.

    Only five percent of the adults who participated in the survey had never heard of the government surveillance programs, and 43 percent had heard a lot about them. The largest share of respondents, 44 percent, had heard at least a little bit about the surveillance.

    While the government has been adamant that the data collection is used to protect the US from terrorist attacks and other threats, a report by the Washington Post in July showed that among the legitimate data collection there was also collection of data belonging to ordinary US and non-US citizens that included “startlingly intimate” and irrelevant material.

    According to Pew, 88 percent of adults believe it would be hard to remove inaccurate information about them online, despite government efforts to regulate requests from online users to remove inaccurate results from search engines.

    Beyond concerns about government surveillance, there are also worries about unsolicited data collection from third-parties. Eighty percent of respondents who use social networking sites said they are concerned about the data they share on social media being collected by advertisers or businesses.

    According to the survey, 64 percent of Americans believe the government should do more to regulate advertisers, while 34 percent think the government should not get more involved.

    Freedom online does have a price, however, with 55 percent of respondents willing to share some information about themselves with companies in order to use online services for free.

    The majority of American adults (61 percent) believe that they could do more to protect their data online, while 37 percent believe they already do enough.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/americans-hesitant-share-info-online-privacy-concerns-escalate-post-snowden-era

    9:30p
    Microsoft Issues Patch for Critical Vulnerability Affecting Windows Systems


    This article originally appeared at The WHIR

    Microsoft has issued a patch to correct a critical vulnerability in its Microsoft Secure Channel or “Schannel” security package that could allow remote code execution on a Windows server or workstation.

    According to a security bulletin posted by Microsoft on Tuesday, the security update resolves a vulnerability, discovered by an IBM X-Force researcher, that could allow specially crafted packets to remotely execute code on Windows servers and systems running an affected version of Schannel.

    Schannel performs the important function of encrypting traffic and transactions on most Windows platforms, and is the standard SSL library that ships with Windows. The new patch corrects how it sanitizes specially crafted packets, eliminating the vulnerability.
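
    After applying the update, one quick, generic sanity check is to confirm that an SSL-fronted service still negotiates TLS normally. The sketch below does that from any machine with Python; it is not a test for the vulnerability itself, and the hostname is a placeholder.

    ```python
    # Minimal sketch: connect to a TLS endpoint and report the negotiated
    # protocol and cipher. This only confirms the service still speaks TLS
    # after patching; it does not detect or exploit the Schannel flaw.
    import socket
    import ssl

    def probe_tls(host: str, port: int = 443) -> None:
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}:{port} negotiated {tls.version()} "
                      f"with cipher {tls.cipher()[0]}")

    if __name__ == "__main__":
        probe_tls("example.com")  # placeholder; point at the server to check
    ```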

    In a report from DataBreachToday.com, Trend Micro technology and solutions vice president JD Sherry said the newly discovered Windows vulnerability would most likely impact Microsoft Exchange mail servers where Microsoft protocols like Schannel are used to encrypt mail traffic. Brian Evans, senior managing consultant at IBM Security Services, said any SSL services reachable from the Internet such as Web and email servers would be likely targets.

    IBM X-Force found the vulnerability back in May 2014 and has included coverage for it in its network Intrusion Prevention System since reporting it. X-Force hasn’t found any evidence of exploitation of this particular bug in the wild, but Microsoft has stated that an exploit of the Schannel vulnerability is likely to be developed soon.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/microsoft-issues-patch-critical-vulnerability-affecting-windows-systems

