Data Center Knowledge | News and analysis for the data center industry

Thursday, June 18th, 2015

    12:00p
    Custom Google Data Center Network Pushes 1 Petabit Per Second

    In a rare peek behind the curtain, a top Google data center network engineer this week revealed some details about the network that interconnects servers and storage devices in the giant’s data centers.

    Amin Vahdat, Google Fellow and technical lead for networking at the company, said Google’s infrastructure team has three main principles when designing its data center networks: it employs a Clos topology, uses a centralized software stack to manage thousands of switches in a single data center, and builds its own software and hardware, relying on custom protocols.
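    For readers unfamiliar with the topology, the sketch below models a generic two-stage folded-Clos (leaf-spine) fabric in Python. The switch counts, port counts, and link speeds are illustrative assumptions rather than Google’s Jupiter parameters; the point is simply how bisection bandwidth follows from wiring every leaf to every spine.

        # Minimal sketch of a generic two-stage folded-Clos (leaf-spine) fabric.
        # Port counts and link speeds are illustrative assumptions, not
        # Google's Jupiter parameters.
        LEAF_COUNT = 32          # top-of-rack (leaf) switches
        SPINE_COUNT = 8          # spine switches
        SERVERS_PER_LEAF = 32    # 10 Gbps server ports on each leaf
        SERVER_GBPS = 10
        UPLINK_GBPS = 40         # one uplink from every leaf to every spine

        # Every leaf connects to every spine, so any two servers are at most
        # leaf -> spine -> leaf apart and flows can be spread across all spines.
        fabric = {f"leaf{l}": [f"spine{s}" for s in range(SPINE_COUNT)]
                  for l in range(LEAF_COUNT)}

        downlink_gbps = SERVERS_PER_LEAF * SERVER_GBPS   # 320 Gbps toward servers
        uplink_gbps = SPINE_COUNT * UPLINK_GBPS          # 320 Gbps toward spines

        # Equal uplink and downlink capacity makes the fabric non-blocking (1:1).
        # Bisection bandwidth is what can cross a cut splitting the leaves in half.
        bisection_gbps = (LEAF_COUNT // 2) * uplink_gbps

        print(f"{len(fabric)} leaves, each wired to {SPINE_COUNT} spines")
        print(f"oversubscription: {downlink_gbps / uplink_gbps:.1f}:1")
        print(f"approximate bisection bandwidth: {bisection_gbps:,} Gbps")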

    Vahdat spoke about Google’s data center network design at the Open Network Summit in Santa Clara, California, Wednesday morning and wrote a blog post about it.

    The company’s latest-generation network architecture is called Jupiter. It has more than 100 times the capacity of Google’s first in-house network technology, which was called Firehose, Vahdat wrote. The company has gone through five generations of data center network architecture.

    A Jupiter fabric in a single data center can provide more than 1 Petabit per second of total bisection bandwidth. According to Vahdat, that is enough bandwidth for more than 100,000 servers to exchange data at 10 Gbps each, or transmit all scanned contents of the Library of Congress in under one-tenth of a second. According to a 2013 paper by NTT, 1 Pbps is equal to transmitting 5,000 two-hour-long HDTV videos in one second.
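    The headline figure is easy to sanity-check under the stated assumptions (100,000 servers exchanging data at 10 Gbps each):

        # Quick check of the 1 Pbps claim under the article's assumptions.
        servers = 100_000
        per_server_gbps = 10
        total_gbps = servers * per_server_gbps    # 1,000,000 Gbps
        total_pbps = total_gbps / 1_000_000       # 1 Pbps = 1,000,000 Gbps
        print(total_pbps)                         # 1.0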

    Google is a pioneer of the do-it-yourself approach to data center network gear and other hardware. The company started building its own network hardware and software about 10 years ago because products that were on the market would not support the scale and speed it needed.

    Other massive data center operators who provide services at global scale – companies like Facebook, Microsoft, and Amazon – have taken a similar approach to infrastructure. If an off-the-shelf solution that does exactly what they need (nothing less and nothing more) isn’t available, they design it themselves and have the same manufacturers in Asia that produce incumbent vendors’ gear manufacture theirs.

    That Google designs its own hardware has been a known fact for some time now, and so has the fact that it relies on software-defined networking technologies. The company published a paper on its SDN-powered WAN, called B4, in 2013, and last year revealed details of Andromeda, the network-virtualization stack that powers its internal services.

    Therefore, it comes as no surprise that the network that interconnects the myriad of devices inside a Google data center is also managed using custom software. Google’s use of the Clos topology is also not surprising, since it is a very common data center network topology.

    Other keynotes at the summit were by Mark Russinovich, CTO at Microsoft Azure, who talked about the way Microsoft uses SDN to enable the 22 global availability regions of its cloud infrastructure, and John Donovan, SVP of technology and operations at AT&T, who revealed the telco’s plans to open source Network Function Virtualization software and hardware specs it has designed to enable its services.

    3:00p
    Telco FairPoint Expands Data Center Business in New Hampshire

    FairPoint Communications, which has several thousand employees and several hundred offices, has opened its second New Hampshire data center roughly one year after dipping its toes into the data center pool for the first time.

    The company has experience running telco-grade central office facilities, which it said translates well to running mission-critical data centers.

    “We’ve been running data centers for 15 years internally,” said Chris Alberding, vice president of products. “Our first data center was an expansion in an existing internal data center. We decided to put in some rack space and see how it does, and we sold half very quickly.”

    That experience served as the business case and the seed of a data center business. The company invested $2.5 million in renovating some 4,000 square feet for the recently opened data center in downtown Manchester.

    FairPoint is the latest in a string of traditional telecommunications companies choosing to increase their focus on data center services. Traditional moneymaking businesses like landlines have long been shrinking, so telecoms have to look elsewhere to offset the drop in revenue. FairPoint’s data center business is just getting started and remains a small portion of the 3,000-person company’s revenue.

    Alberding said that shrinking switching technology has opened up a lot of space. So, in an effort to repurpose it, the company currently offers the colocation basics of space, power, connectivity, and remote hands. This is symbolic of a wider move occurring as telecommunications companies evolve into technology companies, said Alberding.

    “It’s expanding our customer portfolio and products,” he said. “Our goal is to continue to look at gaps that customers have and need and what services we have and can build. Data centers were relatively easy ones and the first step in evolving.”

    Manchester is centrally located, with FairPoint touting its New Hampshire data centers as either a primary or secondary data center for those in a 60-mile radius. It also has a 16,000-mile fiber network that customers may tap.

    Beyond New Hampshire, the company has a big presence with customers in Maine and Vermont, two states where it hopes customers will tap the data center for disaster recovery services, according to FairPoint Executive Vice President and Chief Revenue Officer Tony Tomae.

    Alberding said that many businesses are showing strong interest. The company said it is targeting everyone from startups to Fortune 100 companies, offering everything from half a rack to customized cages in a “pay-as-you-grow” license model. Rather than overbuilding on-premises space to accommodate future needs, customers of FairPoint – and of colocation in general – pay only for the capacity they currently need.


    3:30p
    OpenStack Really is Enterprise-Ready

    Orlando Bayter is CEO of Ormuco.

    Nebula has closed its doors, citing a lack of maturity in the OpenStack market as the reason, and the debate about the leading open source cloud’s enterprise readiness has reignited.

    Nebula was a start-up led by a former NASA CTO that promised to deliver enterprise-grade private clouds based on OpenStack. It received a lot of press coverage, was well-backed by respected VCs, and signed up a good number of high profile customers. While its untimely demise is clearly a big story, it says more about the state of the vendor market coalescing around OpenStack than it does about the technology itself.

    A recent report by GigaOm found that one-third of cloud users use private clouds, half of which are built on OpenStack technology. What’s more, 65 percent of respondents agreed that OpenStack is now enterprise-ready and capable of handling mission-critical workloads. As if to prove this, PayPal announced in March that it has built its own OpenStack cloud, replacing VMware in its data centers.

    Meanwhile, OpenStack is clearly causing something of a stir in the wider cloud market. In a move that many see as a direct response to OpenStack, Microsoft announced a private version of its Azure public cloud platform. Of course, many technology giants – including HP, Red Hat and IBM – are nailing their colors very firmly to the OpenStack mast. The real issue, I think, is the maturity of the vendor market, and confusion among buyers over OpenStack’s purpose.

    There Is No Public or Private Conundrum

    You could be forgiven for thinking, given the way the press covers the cloud battles, that all new cloud adopters are presented with a binary choice: Either you set up your own private cloud in a data center, or go to one of the big public cloud operators and have done with it. As is nearly always the case, the truth is more nuanced.

    GigaOm’s report found that hybrid cloud and multi-cloud adoption strategies are on the rise because these offer the best of both worlds.

    These are solutions that the “Big 3” – AWS, Microsoft Azure and Google – simply do not offer. They are selling public cloud IaaS and, as has been noted recently, are in a “race to the bottom” as they slash prices to gain share.

    OpenStack vendors are simply not in the same game as this trio. Sure, you can build a public cloud with OpenStack, and many vendors – including HP with HP Helion – have done so. But this is about offering their OpenStack private cloud customers the option to burst to a compatible public cloud for scalability.

    The public cloud is great if you’re starting from scratch and all you need is a cheap, high volume solution. If, like most companies, you have a data center full of legacy hardware running battle-hardened applications on which your business depends, the public cloud does not offer you many solutions for transitioning.

    In this space, the “Big 3” are playing catch-up, hence Azure’s new private offering. Currently, if you want a private cloud that can burst to the public cloud, with one coherent ecosystem for development and deployment, and no vendor lock-in, OpenStack is the clear leader.

    The Thorny Issue of Vendor Support and Market Maturity

    PayPal’s adoption of OpenStack in its data centers certainly suggests that, at least for private clouds, the technology has reached enterprise-grade. But for companies without PayPal’s engineering muscle, building a private cloud all on their own, with no vendor support, is a nearly impossible challenge.

    With Nebula closing its doors, potential adopters may be wary of committing themselves to independent vendors, for if those vendors disappear, so too does the support they were providing. One can only feel sympathy for those companies running an OpenStack private cloud in a data center full of hardware they bought on Nebula’s say-so. But it could have been worse.

    As OpenStack is open source, and supported by many companies large and small, it’s not too difficult to get HP, Red Hat or IBM – or one of their partners – to take over the reins. Indeed, this is exactly what Nebula’s CEO told his clients to do.

    If there is a worry about the maturity of the OpenStack vendor market, and the ability of providers to stay the course, the answer is – of course – to go with a vendor that is backed by a large partner company. Having access to a ready-made network of partners and resellers that can support you in case your original vendor goes down the tubes certainly gives peace of mind.

    While the demise of Nebula was unfortunate, these things happen when a new technology attracts independent start-ups that are at risk of running out of money before they achieve profitability. The fact remains that the OpenStack technology itself is certainly enterprise-ready, even if the vendor market coalescing around it remains immature.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.


    4:00p
    ClusterHQ Brings Docker Container Support to EMC Storage

    Coinciding with the formal release of its Flocker data management software for Docker containers, ClusterHQ announced that it has integrated its software with ScaleIO and XtremIO storage systems from EMC.

    While most containers are deployed on top of virtual machines that are already tightly integrated with any number of storage systems, there is a growing number of instances where Docker containers are being deployed directly on top of physical servers that need to be integrated with storage systems.

    To make sure that EMC’s units can be employed in those scenarios, ClusterHQ CEO Mark Davis said, EMC and ClusterHQ have worked together to certify Flocker integration with EMC storage systems.

    Davis said the primary issue that IT organizations will have to contend with when embracing containers on physical servers is the I/O performance issues that result when server utilization rates increase dramatically. Where there may have been 20 to 25 virtual machines running on a physical server, there can be as many as 100 containers in that scenario. Each of those containers represents an application workload generating I/O requests. Davis said ClusterHQ and EMC have worked together to optimize I/O performance in data center environments running containers.
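    A rough back-of-envelope illustration of that pressure, using the workload densities cited above; the per-workload I/O rate is an assumption chosen purely for illustration, not a ClusterHQ or EMC figure:

        # Back-of-envelope: why quadrupling workload density pressures storage I/O.
        IOPS_PER_WORKLOAD = 500     # assumed average per-workload IOPS (illustrative)

        vms_per_host = 25           # virtualized density cited in the article
        containers_per_host = 100   # container density cited in the article

        vm_host_iops = vms_per_host * IOPS_PER_WORKLOAD
        container_host_iops = containers_per_host * IOPS_PER_WORKLOAD

        print(f"virtualized host:   ~{vm_host_iops:,} IOPS")        # ~12,500
        print(f"containerized host: ~{container_host_iops:,} IOPS") # ~50,000
        # Roughly 4x the workloads per host implies roughly 4x the I/O demand
        # landing on the same storage back end.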

    “We’re a data volume manager,” said Davis. “Essentially, we’ve now built drivers for EMC storage into our software.”

    While the vast majority of Docker containers are being used in application development and testing scenarios today, a recent survey of 285 IT operations professionals conducted by ClusterHQ and DevOps.com found that 38 percent of respondents said they were already using containers to one degree or another in production environments. More significantly, 71 percent said they expected to be using containers in production environments in the next 12 months.

    The survey found that 92 percent of respondents were either using or investigating Docker, while 32 percent have used or investigated LXC containers, followed by 20 percent that had some experience with Rocket containers, created by a company called CoreOS.

    4:30p
    Breqwatr Upgrades Hyper-Converged OpenStack Appliance

    Looking to make it simpler for IT organizations to stand up private clouds, Breqwatr unveiled a hyper-converged appliance based on the OpenStack cloud management framework.

    Breqwatr CEO John Kadianos said that by bundling a distribution of OpenStack with its own set of Intel-based appliances, Breqwatr is making it easier for IT organizations to embrace private cloud computing using open source technology that would take them weeks to provision on their own.

    “We’re delivering a curated OpenStack distribution with integrated hardware,” said Kadianos. “We’re taking the guesswork out of OpenStack.”

    Now based on eight Intel Xeon E5-2660 v2 processors that provide 160 logical CPUs, Breqwatr Cloud Appliance 2.0 includes 24TB of solid-state drives configured within an object storage system, along with 1,024GB of system memory.
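    The logical CPU count follows directly from the part’s core and thread counts:

        # How eight E5-2660 v2 processors yield 160 logical CPUs.
        sockets = 8               # processors in the appliance
        cores_per_socket = 10     # the E5-2660 v2 is a 10-core part
        threads_per_core = 2      # Hyper-Threading doubles the logical CPU count
        print(sockets * cores_per_socket * threads_per_core)   # 160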

    Because OpenStack is an open source project, new additions to it are made frequently. Breqwatr essentially serves as an arbiter between IT organizations and the contributors to the OpenStack project, determining which specific components are ready to be deployed in a production environment.

    Built on top of the open source Kernel-based Virtual Machine (KVM) hypervisor that is closely associated with OpenStack, the Breqwatr Cloud Appliance 2.0 includes components for virtual machine management, as well as chargeback and resource-based quota models. In addition, Kadianos said the Breqwatr appliance can be connected to a variety of third-party storage systems using an open application programming interface (API). Both NetApp and SolidFire are already certified Breqwatr partners. Breqwatr also has a partnership with Arista Networks, a provider of high-end, top-of-rack switches.

    The appliance is designed to scale out as additional compute resources are required and uses what Kadianos described as a purpose-built control plane. Its other attributes include a graphical user interface designed to make the Breqwatr Cloud Appliance simple to configure.

    Breqwatr is not the only vendor to have created a hyper-converged appliance, but Kadianos said that as IT organizations make the shift to OpenStack, the level of automation required to take advantage of the technology has increased substantially. As a result, they will be looking to appliance vendors to help them transition to a set of technologies with which many are unfamiliar. In time, Kadianos said, Breqwatr may opt to make its software available separately. For now, it plans to continue providing an integrated hardware platform that is comparatively turnkey to deploy.

    4:38p
    Joyent Adds Non-Docker Services to Triton Container Cloud

    Joyent announced the ability to run container-native Linux images directly on bare metal with Joyent Triton, its bare metal container cloud. While running Docker is a focus of Triton, Joyent is extending its capabilities beyond Docker, and its first major partner in that effort is Canonical, the company behind the popular Ubuntu Linux distribution.

    With container-native Linux, developers can leverage the operational efficiency of containers and run legacy applications and other data-intensive services without having to “Dockerize” them first, according to the two companies. Developers can tap Joyent’s downstream SmartOS in combination with the Ubuntu developer experience.

    Triton just recently entered general availability. The company employed a unique architecture in its cloud to make Docker containers run directly on bare metal, skipping the virtualization layer.

    It raised $15 million in October 2014 in part to help drive a business strategy that included Docker container cloud services.

    Triton is available for on-premise deployments or as a Joyent-run cloud service. It is compatible with all major Linux distributions, but Joyent and Canonical engineers have collaborated to produce certified, container-native Ubuntu images that are optimized for Triton.

    Solving the Linux Binaries Issue

    Getting Linux to run natively wasn’t easy, and some technological hurdles had to be cleared first.

    “The big problem we solved is you need to be able to run Linux binaries,” Joyent CTO Bryan Cantrill said. Someone in the SmartOS community (SmartOS is Joyent’s homebrew cloud operating system) resurrected an old project that had been shelved and discovered it worked with a lot of applications.

    “So we took that technology and finished it,” said Cantrill. The company got it to run all Linux binaries on metal, at speed, in the context of its SmartOS “zones,” solving the binaries problem. What came out of this effort was the ability to run Ubuntu natively on bare metal.

    “We approached Canonical and they were enthusiastic about it,” said Cantrill.

    Hoping to Expand Triton’s Visibility

    The Ubuntu flavor of Linux is massively popular with the developer crowd. Upwards of 70 percent of Docker images are built on Ubuntu. Supporting it natively opens up Triton to a much wider audience, said Cantrill.

    Canonical will provide commercial support for the container-native Ubuntu images. “This gives them the same great Ubuntu experience developers love on their container infrastructure on top of SmartOS,” said Canonical CEO Jane Silber.

    Joyent is a techie’s cloud service provider – and its foundation is SmartOS. The company built its abstraction layer using a very different cookbook than other cloud providers, drawing on its deep ties to Sun Solaris. For this reason, Joyent has a lot of technological differentiation in the cloud world, but it also means using it comes with a steeper learning curve.

    Native Ubuntu provides a more comfortable option. “The problem is at Joyent, you had to get both SmartOS and containers,” said Cantrill. “By coupling up with Canonical Ubuntu, it brings the most popular OS to Triton.”

    The combination brings the “Docker world” to the SmartOS substrate, “and customers get the implementation details of Linux that make Triton easier,” he said.

    A Containerized Legacy

    Docker containers are less secure than virtualization, according to Cantrill, but Joyent believes it has fixed a lot of the security issues in its approach.

    “In order for containers to run on metal, the substrate needs to be secure,” he said. “Joyent is secure thanks to its concept of zones. SmartOS has been in multi-tenant production for a decade, with a proven track record.”

    Joyent’s history around containers goes much further back than the emergence of Docker. “The organizing principle at Joyent was around offering elastic compute as a service through OS containers,” said Cantrill.

    The importance of the SmartOS abstraction layer Joyent created is in its ability to effectively leverage the promise of operational efficiencies presented by containers, according to Cantrill. “If you virtualize the operating system instead of the hardware, you get much greater density and better performance,” he said.

    However, it wasn’t operational efficiencies that originally got people excited about Docker, said Cantrill; it was the developer experience. He believes containers will deliver efficiency gains similar to or greater than the order-of-magnitude improvement virtualization brought. They allow packing much more onto servers than is possible with virtualization, because you’re not virtualizing separate instances of each application’s dependencies. The bare metal offering was created because virtualization affects the performance of applications in Docker containers.

    5:00p
    DreamHost Improves Dedicated Server Performance with Solid State Drives


    This article originally appeared at The WHIR

    DreamHost announced Wednesday that it has upgraded the hardware of its dedicated server packages, completing its transition to offering solid-state drives with every core DreamHost offering. SSDs and high-core-count CPUs are now options for DreamHost’s fully managed dedicated server customers seeking greater performance.

    All solutions on the “DreamServer” platform, which includes dedicated servers, shared hosting, virtual private servers, and DreamPress 2, are now powered by SSDs or include SSD storage options. The new “Summer Moon” configuration includes a 12-core Intel Xeon v3 (Haswell) processor, 16GB of RAM, and 240GB of SSD storage starting at $279 a month.

    The added hardware gives DreamHost customers a choice of 4- to 12-core CPUs, 4 to 64GB of RAM, and either 240GB of SSD or 1TB or 2TB of traditional HDD storage.

    “Frankly this was a long time coming,” Patrick Lane, DreamHost’s VP of data center operations and dedicated hosting product manager said in a statement. “Our dedicated servers have been popular for years with users who have wanted more power than shared hosting could provide but weren’t comfortable managing their own cloud compute instances on DreamCompute, our public cloud solution. Now they don’t have to make the choice – we’ll give them a managed environment with plenty of dedicated power to go with it. Boom.”

    As far back as November, DreamHost acknowledged customer demand for increased performance, when it added SSDs and Ubuntu Server 12.04 LTS to its VPS packages. The company also upgraded its shared hosting and DreamPress 2 offerings with SSDs in March, so their addition to DreamHost’s dedicated server packages was just a matter of time.

    MonsterMegs upgraded to SSD in May, making it one of the latest web hosting providers to do so.

    This first ran at http://www.thewhir.com/web-hosting-news/dreamhost-improves-dedicated-server-performance-with-solid-state-drives

    5:30p
    Shippable Formations Launches to Easily Test and Deploy Complex Docker Applications


    This article originally appeared at The WHIR

    Shippable, a developer of DevOps tools for Docker applications, has launched its Shippable Formations product line, which makes it easy for developers to see the status of all code versions running across the software stack, allowing them to detect the code changes that caused bugs and to push out clean code with one click.

    Shippable Formations is designed to address major issues that slow down continuous integration and functional testing of complex Docker applications. It also simplifies deployment to any cloud provider and enables easy rollbacks to older versions, according to the company.

    Formations is available as Software-as-a-Service starting at $7 per container per month, and the company is also beta testing an on-premise version for companies that can’t store data remotely for security or compliance reasons.

    According to Shippable CEO Avi Cavale, Docker has made it dramatically easier to set up and run test environments. Traditional test infrastructure can typically take 15 to 30 minutes to bring up, causing many companies to amalgamate days’ worth of code changes into a single test. “Docker lets you bring [test environments] up in 20 seconds…This means you can do 30 to 40 test deployments per day,” he said.
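    A rough sketch of that arithmetic, assuming an eight-hour working day and an illustrative ten-minute test run (both figures are assumptions, not Shippable’s):

        # Rough illustration of how environment bring-up time caps test cycles.
        WORKDAY_SECONDS = 8 * 3600
        TEST_RUN_SECONDS = 10 * 60    # assumed time to actually execute the tests

        for label, bringup_seconds in [("traditional (30 min bring-up)", 30 * 60),
                                       ("Docker (20 s bring-up)", 20)]:
            cycles = WORKDAY_SECONDS // (bringup_seconds + TEST_RUN_SECONDS)
            print(f"{label}: ~{cycles} test deployments per day")
        # traditional: ~12 per day; Docker: ~46 per day under these assumptions,
        # in the same ballpark as the 30 to 40 deployments Cavale describes.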

    Google Kubernetes, for instance, does 150 to 200 daily builds using the Shippable platform.

    Being able to test more frequently means that conflicts caused by code changes can be identified and rooted out more quickly.

    But Formations goes even further, accounting for the complexities of applications with different tiers. For instance, a three-tier application with Frontend, API, and Database tiers could have multiple conflicts between different developers and different tiers. Formations provides the insight to drill down into various application tiers to find code commits that might conflict with other parts of the application.
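    A minimal sketch of that idea (an illustration only, not Shippable’s implementation): group recent commits by the tier they touch and flag tiers where changes from different developers land at the same time.

        # Illustration only, not Shippable's implementation: flag tiers that have
        # concurrent commits from different authors as potential conflict points.
        from collections import defaultdict

        # Hypothetical recent commits: (tier, author, change description)
        commits = [
            ("frontend", "alice", "rename /login route"),
            ("api",      "bob",   "change /login response schema"),
            ("api",      "carol", "add auth token field"),
            ("database", "bob",   "add index on users.email"),
        ]

        changes_by_tier = defaultdict(list)
        for tier, author, change in commits:
            changes_by_tier[tier].append((author, change))

        for tier, changes in changes_by_tier.items():
            if len({author for author, _ in changes}) > 1:
                print(f"possible conflict in the {tier} tier: {changes}")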

    Formations, Cavale said, is a “DevOps 2.0” solution that means developers no longer have to write code and scripts in unfamiliar languages to make deployments. Developers can run automated integration and functional testing against a real topology. Also, applications are decoupled from servers, giving developers full visibility into their applications without needing SSH access or separate configuration management, version management, or continuous deployment tooling.

    Formations is fully compatible with Docker Compose for multi-container topologies, and integrates with source code management tools such as GitHub and Bitbucket.

    Shippable Formations can also be used with CI tools like Jenkins, TravisCI, or CircleCI, or combined with Shippable’s own continuous integration service, Shippable CI/CD.

    This Shippable CI/CD service has also been updated this week to deal with more advanced Docker workflows. The update adds the ability to use any Dockerfile or Docker image for builds and tests without having to provision your own build host, the ability to monitor changes to Docker images no matter who owns or manages them, and Google Container Registry integration.

    Running more than 60,000 containers weekly, Shippable had initially developed Formations for itself, and now it’s generally available. “As our customers started to adopt Docker and ask for features, we realized they were describing what we’d already built for ourselves,” Cavale said. “We look forward to continuing to work with the community and address the needs of developers.”

    Docker has been growing in popularity for some time. Earlier this month ProfitBricks launched a preview version of its Docker hosting platform which enables developers to build applications in the ProfitBricks cloud and access dedicated resources that can autoscale the Docker hosts.

    This first ran at http://www.thewhir.com/web-hosting-news/shippable-formations-launches-to-easily-test-and-deploy-complex-docker-applications

    6:35p
    Understanding How Data Center Uptime Impacts Business Revenue

    There’s really no question that the modern data center is tied directly to the successes of a business. In fact, reliance on the data center platform will only continue to grow as more systems move to cloud, more applications are deployed, and more devices connect into the data center. With this in mind, data center administrators are growing increasingly concerned with redundancy and uptime.

    Here’s the reality: uptime is revenue. An interesting white paper dives into how outages specifically impact business revenue and why critical uptime capabilities are so important. The average data center outage costs about $7,900 per minute, or roughly $474,000 per hour. With stakes this high, organizations must invest in better power management and data center resiliency solutions. Otherwise, you could be facing a serious outage bill.
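    The hourly figure is just the per-minute cost scaled up; a quick check (the four-hour scenario is an illustrative extension):

        # Quick check of the outage-cost arithmetic cited above.
        cost_per_minute = 7_900                  # dollars
        cost_per_hour = cost_per_minute * 60     # 474,000 dollars
        four_hour_outage = cost_per_hour * 4     # illustrative scenario
        print(f"${cost_per_hour:,} per hour; ${four_hour_outage:,} for a four-hour outage")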

    Data center services are only going to continue to become more critical for the modern business. This means that uptime will be an absolutely critical factor to consider for any data center architecture. Download this white paper today to learn about the real revenue impacts of an unplanned outage and where new solutions can ensure uptime and mitigate outage risks.

