Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
 

Thursday, October 17th, 2013

    Time Event
    11:30a
    VMware Adds vCloud Data Centers, Acquires Desktone

    At VMworld 2013 in Barcelona this week, VMware expanded its vCloud Hybrid Service in the U.S. and the UK by adding vCloud data centers, vaulted into a leader position in the Desktop as a Service market by acquiring Desktone, and expanded capabilities for the software-defined data center. The VMworld event conversation can be followed on Twitter hashtag #VMworld.

    Hybrid vCloud Data Center expansion and Online Marketplace

    VMware announced two new locations where it will offer its vCloud Hybrid Service: Santa Clara, California, and Sterling, Virginia. The two facilities complement VMware’s existing data center in Las Vegas. Both locations are designed to serve the growing demand for vCloud Hybrid Service and comprise high-performance cloud architecture, including fully redundant VM service, enterprise-class storage, and full network virtualization.

    “With the success of VMware vCloud  Hybrid Service, we are rapidly expanding service capacity with robust physical infrastructure that can scale to accommodate our growing customer demand,” said Bill Fathers, senior vice president and general manager, Hybrid Cloud Services Business Unit, VMware. “Customers now have additional strategic East and West coast locations from which to take advantage of VMware’s hybrid cloud service that is completely interoperable with existing infrastructure and enables new and existing applications to run without compromise.”

    VMware also launched a UK presence for its vCloud Hybrid Service, with a new data center location in Slough. VMware will offer a private beta of vCloud Hybrid Service in the UK in the fourth quarter of 2013, with general availability planned in the first quarter of 2014. The international infrastructure as a service (IaaS) expansion demonstrates the company’s commitment to providing European customers with a fast path to the cloud.

    “We have to be able to respond quickly to client requests,” said Angus Gregory, CEO of Biomni, a vCloud Hybrid Service customer and provider of IT service catalog and request fulfillment solutions based in London. “With vCloud Hybrid Service and our longtime IT partner Computacenter, Biomni can create environments in the hybrid cloud that are identical to our internal platform and seamlessly move workloads with complete confidence. The service enables Biomni to not only deliver a better quality and more responsive service for our clients and strategic partner Computacenter, but also allows us to grow more rapidly.”

    VMware also announced the vCloud Hybrid Service Application Marketplace – with access to 3,800 pre-qualified applications and rich online resources – where customers can discover, download, test-drive and buy solutions for the vCloud Hybrid Service. Customers can now find and deploy listings offered by VMware, buy critical software as a service (SaaS) solutions on vCloud Hybrid Service, find thousands of vCloud Hybrid Service virtual appliances, and read guidelines and best practices for bringing their own licenses to vCloud Hybrid Service.

    VMware acquires Desktone

    VMware announced that it has acquired Desktone, an industry leader in desktop-as-a-service (DaaS) with an advanced multi-tenant desktop virtualization platform for delivering Windows desktops and applications as a cloud service. The Desktone platform was purpose-built for service providers to deliver Windows applications and desktops as a cloud service, with unique capabilities such as multi-tenancy, self-service of virtual desktops, a grid-based architecture for elastic scalability, and a low cost of delivery.

    “Desktone is a leader in desktop-as-a-service and has a complete and proven blueprint for enabling service providers to deliver DaaS,” said Sanjay Poonen, executive vice president and general manager, End-User Computing, VMware. “By bringing Desktone’s innovative platform in house, VMware can accelerate the delivery of DaaS through its network of over 11,000 VMware service provider partners while helping to shape and lead the future of the industry.”

    “The combination of VMware and Desktone’s global partner network will allow customers in all regions to benefit from the economies of scale provided by DaaS,” said Peter McKay, president and chief executive officer, Desktone. “With the Desktone platform already certified with the VMware vCloud technology, VMware vSphere and VMware Horizon View, customers will be able to quickly modernize and move their desktop infrastructure to the cloud and open new possibilities for customers, users and service providers.”

    Expanded capabilities for the software-defined data center

    VMware announced new capabilities and enhancements across its portfolio of cloud management solutions to simplify and automate management of IT services for multiple clouds and platforms. New product releases include VMware vCloud Automation Center 6.0, VMware vCenter Operations Management Suite 5.8, VMware IT Business Management Suite, and VMware vCenter Log Insight 1.5. In addition, VMware will update the automation and management capabilities of VMware vCloud Suite 5.5.

    VMware also announced new professional services to help customers outline the strategy and technology roadmap needed to define and implement cloud management and automation solutions. The new Cloud Automation Design and Deploy service focuses on policy-based automation, enabling customers to build cloud management infrastructure for improved time to project completion, operational reliability and efficiency. Enhanced VMware Accelerate Advisory Services help organizations evaluate the value of cloud operations and cloud business management solutions.

    “For IT to keep pace with business demands, stay relevant and deliver IT services with agility, it must transition from being builders to brokers of IT services,” said Ramin Sayar, senior vice president and general manager, Cloud Management, VMware. “VMware cloud management solutions enable IT to deliver this agility while standardizing and ensuring governance and control – whether the goal is to better manage a highly virtualized environment, build a vSphere-based private cloud, extend to the hybrid cloud or broker services across many providers.”

    12:30p
    Puppet Labs Helps Visualize IT Infrastructure Changes In Puppet Enterprise 3.1

    Puppet Labs has released Puppet Enterprise 3.1, which includes several enhancements, among them a powerful reporting tool for inspecting infrastructure changes. Puppet Enterprise 3.1 also adds support for Google Compute Engine (GCE) and Red Hat Enterprise Linux (RHEL) 4.0, further expanding interoperability with a range of enterprise IT platforms.

    The addition of Google Compute Engine support enables customers to automatically provision, configure and deploy to GCE cloud nodes with a single command. This cloud support comes in addition to existing support for Amazon Web Services and VMware cloud platforms. Puppet Labs remains closely aligned with VMware, and separately announced enhanced integration with VMware vCloud Automation Center 6.0; this should come as no surprise, given VMware’s ongoing investment in Puppet.
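
    To illustrate the kind of call such a provisioning step wraps, here is a rough Python sketch against the Google Compute Engine API. This is not Puppet’s actual command; the project, zone, machine type and image below are placeholder values.

        # Rough sketch (not Puppet's actual interface): creating a GCE instance
        # via the Compute Engine v1 API with google-api-python-client. Project,
        # zone, machine type and image are placeholders; credentials are assumed
        # to come from Application Default Credentials.
        from googleapiclient import discovery

        compute = discovery.build('compute', 'v1')

        project = 'example-project'          # placeholder project ID
        zone = 'us-central1-a'               # placeholder zone

        body = {
            'name': 'puppet-managed-node-1',
            'machineType': 'zones/%s/machineTypes/n1-standard-1' % zone,
            'disks': [{
                'boot': True,
                'autoDelete': True,
                'initializeParams': {
                    'sourceImage': 'projects/debian-cloud/global/images/family/debian-11',
                },
            }],
            'networkInterfaces': [{'network': 'global/networks/default'}],
        }

        operation = compute.instances().insert(project=project, zone=zone, body=body).execute()
        print('started operation:', operation['name'])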

    RHEL 4.0 support further broadens the list of supported enterprise platforms, joining Microsoft Windows, AIX, Solaris, Debian and Ubuntu; RHEL 5.0 and 6.0 were already supported.

    Event Inspector is Big Release Addition

    Perhaps the biggest addition in Puppet Enterprise 3.1 is the event inspector, an interactive reporting tool that provides system administrators with a visualization of IT infrastructure changes. The new reporting tool allows sysadmins to interactively inspect change events in multiple ways, giving them greater insight into infrastructure changes.

    The reporting tool provides actionable insight to troubleshoot and resolve issues, reducing risk of downtime and accelerating time to resolution.

    “With the event inspector in Puppet Enterprise 3.1, we’re addressing a huge challenge for operations teams managing IT infrastructure changes,” said Luke Kanies, founder and CEO of Puppet Labs. “System administrators typically have to sift through numerous log files and use ad hoc scripts to understand changes and their impact. The event inspector delivers an opinionated reporting tool to help them quickly identify the ‘what,’ ‘where’ and ‘how’ of changes, understand the impact, and then take action.”

    Also new to Puppet Enterprise 3.1 are powerful capabilities for advanced users, including support for rebooting Windows after package installation, and discoverable, configurable classes and parameters from within the Puppet Enterprise GUI console.

    1:30p
    Next-Generation Data Centers Require Next-Generation Security


    We live in a world full of technological buzz terms – cloud computing, software-defined technologies, and now, next-generation security. The challenge with understanding new types of technologies is that the marketing machine usually takes charge before there has been any serious explanation.

    The term “next-generation” security was born as a direct result of new types of technologies requiring greater levels of security flexibility. What does that mean? Communications happening over the cloud, on a bring-your-own device, or through a virtual portal all carry new requirements that older security platforms simply could not meet. Furthermore, at the core of all of these new technologies sits the data center. With so much more reliance on modern data center infrastructure and everything it hosts, security platforms had to make that next-level jump.

    Next-generation security technologies are much more than your standard firewall. These are intelligent devices which are application, cloud and user aware. These are special services, new policies and complete virtual appliances logically located throughout an environment.

    So, what are these new types of security products? Let’s take a look at a few:

    • Security Beyond the Physical: We’ve come far beyond the standard physical firewall. Now, security appliances are being deployed at various nodes within a network – internal, external, at a cloud site, or in a DMZ. Some of these appliances can be physical, while others are completely virtual. The flexibility of virtual security appliances means more control over networks, traffic flow and even policy creation. Furthermore, these appliances can be logically located inside of a network running special policies or at the edge protecting cloud-facing applications. New physical content delivery appliances even allow you to virtualize some security services directly on top of the platform. For example, the NetScaler SDX can run a virtual Websense service directly on the appliance. This allows for things like DLP and even greater application awareness.
    • New Types of Policy Engines. The world of cloud computing requires new types of security engines. Layer 4-7 DDoS protection (volumetric and application-layer), intrusion prevention/detection services (IPS/IDS), and data-loss prevention (DLP) are just a few examples of some advanced protection features. These new engines must scan multiple points within and outside of a network. Furthermore, organizations with heavy regulatory compliance requirements have to be even more careful with their data. Some healthcare organizations use DLP technologies that scan data leaving and entering the network. From there, they scan for patterns, ‘xxx-xx-xxx’ for example, to flag, stop and report malicious data leakages (a minimal sketch of that pattern check follows this list). Next-generation security platforms are designed to help stop data loss by integrating into various technologies, including software-defined networks. These policy engines allow for granular data-flow control as core information flows between the end-user, your data center, and the cloud.
    • Cloud-Ready Endpoint Control. As new devices try to connect into a corporate network, there has to be some means of control. Now, border security devices are being deployed with advanced interrogation engines capable of granularly scanning all inbound devices. Organizations can set certain policy metrics and present only certain content if those policies aren’t met. Checking for rooted devices, the right service pack, or even the latest A/V can all be set as interrogation points. Further control can be derived from the use of mobile/enterprise device management (MDM/EDM) solutions. Having the capability to remotely locate or wipe a stolen or lost device can be very handy. Remember, trends around IT consumerization and mobility are only going to continue growing. This means more users will be utilizing the device that helps them be most productive. It’ll be up to your data center’s next-generation security model to help deliver those resources and keep them secure.
    • Software-Defined Security. Now that security devices are being distributed to multiple points, new types of communication methods are being established to create a faster and more secure cloud environment. Working closely in conjunction with software-defined networking (SDN), creating secure site-to-site connections is now a must. Many organizations are utilizing a public or hybrid cloud platform, which may require a virtual security appliance to be deployed at the provider site. From there, a physical or virtual appliance at the corporate site can be used to create a secure, monitored tunnel into the cloud. Remember, next-generation security platforms are not only cloud and application aware; they provide layer 4-7 networking services and data protection. The idea is to create app-awareness, increase control and create flexibility around your environment to help facilitate an ever-evolving business model.
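
    As a concrete example of the pattern-matching step mentioned in the policy-engine item above, here is a minimal Python sketch that flags outbound text containing an SSN-like pattern. The pattern, function name and sample payload are purely illustrative; real next-generation engines inspect traffic at layers 4-7 and cover many more data types.

        # Minimal sketch of the pattern-flagging idea behind a DLP scan: check
        # outbound text for an SSN-like pattern and flag it for review.
        import re

        SSN_LIKE = re.compile(r'\b\d{3}-\d{2}-\d{4}\b')

        def flag_outbound(payload):
            """Return True if the payload looks like it contains sensitive data."""
            return bool(SSN_LIKE.search(payload))

        if flag_outbound('patient export: 123-45-6789'):
            print('flagged: possible data leakage; block and report')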

    There are going to be lots of different definitions out there for next-generation security. It’s important, however, to understand the core meaning of the technology. Security products have simply evolved beyond the standard firewall platform into something capable of supporting numerous different types of services. In many cases these services all work together to support a single platform, cloud computing being one example. Next-generation technologies will always revolve heavily around security, agility, and the ability to evolve quickly to the needs of a growing business. As more distributed technologies take shape in the industry, there will be a greater need for dynamic, cloud-aware security solutions.

    1:46p
    With Havana Release, OpenStack Adds Enterprise Features

    The OpenStack Foundation today announced the release of Havana, the eighth version of the open source cloud platform.

    Havana provides a step forward in app-driven capabilities, an improved operational experience and additional enterprise-grade features. The two big projects that are now fully integrated are Metering & Monitoring (Ceilometer) and Orchestration (Heat); both were incubated in the previous release, dubbed Grizzly.

    Metering and Monitoring (Ceilometer) provides a central collection point for metering and monitoring data. One example given is collecting usage information so billing systems can determine which workloads belong to heavy customers.

    “The metering and monitoring is critical for anyone who is running a cloud,” said Jonathan Bryce, Executive Director of the OpenStack Foundation. “It gives you visibility into storage, networking, and compute, and is able to aggregate all of that together to a central point. It also allows you to do basic alerting. This is useful from an administration perspective, but one of the big uses of this is with the orchestration project.”
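
    As a rough sketch of how that usage data might be pulled for a billing system, the following Python snippet assumes python-ceilometerclient and placeholder Keystone credentials and project ID; it is not any specific vendor’s billing integration.

        # Rough sketch: pull hourly CPU utilization statistics for one project
        # from Ceilometer, the kind of usage data a billing system would aggregate.
        from ceilometerclient import client

        ceilo = client.get_client(
            '2',
            os_username='admin',                          # placeholder credentials
            os_password='secret',
            os_tenant_name='admin',
            os_auth_url='http://controller:5000/v2.0')

        query = [{'field': 'project_id', 'op': 'eq', 'value': 'PROJECT_UUID'}]
        for stat in ceilo.statistics.list(meter_name='cpu_util', q=query, period=3600):
            print(stat.period_start, stat.avg)            # start of each hour, average CPU %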

    Orchestration (Heat) is a template-based orchestration engine for OpenStack. For example, developers define application deployment patterns that specify all of the infrastructure resources an app needs. “It allows an application developer to describe all the resources he needs to run,” said Bryce. “You can automate auto-scaling.”
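
    A minimal sketch of that idea, assuming python-heatclient, a placeholder Heat endpoint and token, and illustrative image and flavor names: a small HOT template describing one server resource, submitted as a new stack.

        # Minimal sketch: submit a small HOT template as a new Heat stack.
        from heatclient.client import Client

        template = {
            'heat_template_version': '2013-05-23',
            'resources': {
                'web_server': {
                    'type': 'OS::Nova::Server',
                    'properties': {
                        'image': 'cirros-0.3.2',   # placeholder image name
                        'flavor': 'm1.small',      # placeholder flavor name
                    },
                },
            },
        }

        heat = Client('1', endpoint='http://controller:8004/v1/TENANT_ID', token='AUTH_TOKEN')
        heat.stacks.create(stack_name='demo-app', template=template, parameters={})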

    Object Storage Goes Global

    One feature that is a little more under the radar but of significant importance is global clusters for object storage.

    “The global clustering feature allows you to take your object storage environment, a cost-effective system for backups, and run it across several data centers,” said Bryce. “Up to now, you’ve been able to run a robust distributed storage system in a data center, or data centers in close proximity. Now you can do it all over the planet, and it acts as a single cluster. This is a pretty impressive feat. Now you can do geographic load balancing, and deliver content from a physical location closest to your users. You could use this to build a CDN, or a private CDN for a worldwide enterprise. If you have a lot of user-generated content, you can distribute it out, and it behaves as a single, logical OpenStack environment. Also, if you want a really robust disaster recovery plan, you need that geographical element to it. This takes your disaster recovery to an entirely new scale. It’s a cluster, in-sync, managed across the entire infrastructure.”
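
    For a sense of what this looks like from the application side, here is a minimal python-swiftclient sketch with placeholder auth URL and credentials; the client calls are the same whether the object ring spans one data center or several, since replication and geographic placement happen inside the cluster.

        # Minimal sketch: write and read an object in Swift; where the replicas
        # land is decided by the cluster's ring, not by the client.
        from swiftclient.client import Connection

        conn = Connection(
            authurl='http://controller:5000/v2.0',   # placeholder Keystone URL
            user='demo:demo',
            key='secret',
            auth_version='2')

        conn.put_container('backups')
        conn.put_object('backups', 'db-dump.tar.gz', contents=b'example payload')
        headers, body = conn.get_object('backups', 'db-dump.tar.gz')
        print(headers['etag'], len(body))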

    The user interface has also been improved, with more functionality exposed on the dashboard. There’s continued support for additional plugins, as well as metered usage statistics.

    Havana also adds some very enterprise-friendly features: all APIs now support SSL encryption, and there are new Virtual Private Network and Firewall-as-a-Service capabilities. You can now boot from volume for live migration, and there’s added support for rolling upgrades.

    “Community Has Matured”

    OpenStack is heavily pushing community involvement and education. “Because the community has really matured, we’ve been able to get in the major fixes and bug fixes on time, and as promised,” said Bryce. “It’s really cool to see one of the largest collaborations work so well.”

    There were 910 contributors to Havana, a more than 60 percent increase from the Grizzly release. There are a total of 392 new features, representing a 32 percent increase in the total lines of code from April to September. Over 20,000 commits were merged, and an OpenStack cloud was deployed for testing about 700 times per day on average.

    “The foundation was formed last September, and one of our top priorities at the summit was around education,” said Bryce. “We launched a training marketplace, as well as training courses in close to 40 countries. We’re finding that there’s an incredible demand for OpenStack skills.”

    2:00p
    Cloud Channel Summit to Engage Cloud Vendors, Hosting Companies

    The Cloud Channel Summit, set for November 4, will offer participants an opportunity to focus on building successful alliances in the cloud.

    Leading cloud vendors will be on hand to talk about enlisting channel partners to enhance their solutions, especially with an eye toward meeting the needs of geographic and vertical markets. The one-day event will be held at the Computer History Museum in Mountain View, CA.

    The Cloud Channel Summit aims to provide a valuable meeting place for channel and partner development executives to share industry best practices and establish new relationships. The event organizers plan to draw key channel and business development executives from major Cloud vendors, VARs, ISVs, SIs, hosting companies, and other service providers. DCK Readers will get a discounted price for the event, when registering through this Cloud Channel Summit link.

    Speakers will include:

    Brian Matsubara
    Head of Global Technology Alliances, Amazon Web Services

    Chris Rimer
    Global Head of Partner Business, Google Cloud Platform, Google Enterprise

    Adam Nelson
    Head of Channel Sales and Partnerships, Dropbox

    Sanjay Sharma
    Director of Business Development, HP

    Darren Cunningham
    VP, Marketing, SnapLogic

    Sean McCaffery
    Vice President of Channel Sales and Operations, ViaWest

    Jonathan Sass
    Category Manager, Cloud Services and Software, Spiceworks

    Daniel Saks
    President and CEO, AppDirect

    Topics of discussion to include:

    • Winning channel programs – what’s working and what’s not?
    • How to build and leverage a cloud ecosystem?
    • Understanding Cloud vendor/channel partner roles and relationships
    • What are the economics of a winning Cloud partnership?
    • Who owns the customer? Building collaborative sales and support models
    • Channel/partner success stories

    To receive the discounted price for the one-day event, register through this Cloud Channel Summit link.


    2:15p
    How the IT Giants Stay at the Cutting Edge

    Kyle Bittner coordinates IT asset recovery efforts as the business development manager of Ex-IT Technologies.

    KYLE BITTNER
    EX-IT Technologies

    The IT industry is fiercely competitive, with data centers striving for the top position in effectiveness, efficiency and stability. With costly new technologies appearing often, it can be difficult to stay ahead. IT departments, large and small, know the key to a seamless upgrade path is the effective resale of aging equipment and components to fuel new purchases.

    Budgeting the Refresh Cycle

    A common upgrade tactic of the IT giants is getting a valuation on the equipment currently in operation prior to an upgrade. The company then combines its upgrade budget with the value of that equipment to finance a larger purchase. An accurate valuation depends on using an asset recovery company experienced with the equipment being retired. It is also important to discuss the time frame, since the value of used assets declines over time.

    Another way to gain funds for the new purchase is memory. Often data centers will outgrow their previous operating systems and need to run multiple ones using virtualization software. That software requires more memory. Since systems have a set number of memory slots, it is necessary to buy higher-density modules to replace the low-density ones. If the old modules cannot find a home in a server with open slots, they can easily be sold to an asset recovery company and converted to cash.
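
    A back-of-the-envelope sketch of that trade-off follows, with hypothetical module densities and prices; the real numbers depend on the platform and the resale market.

        # Hypothetical figures only: capacity gained by swapping low-density
        # modules for high-density ones, and how resale of the old modules
        # offsets the upgrade cost.
        SLOTS = 16
        OLD_GB_PER_MODULE, NEW_GB_PER_MODULE = 8, 32      # assumed densities
        NEW_MODULE_COST, OLD_MODULE_RESALE = 180, 35      # assumed prices, USD

        old_capacity = SLOTS * OLD_GB_PER_MODULE          # 128 GB
        new_capacity = SLOTS * NEW_GB_PER_MODULE          # 512 GB
        net_cost = SLOTS * (NEW_MODULE_COST - OLD_MODULE_RESALE)

        print('capacity %d GB -> %d GB, net upgrade cost $%d'
              % (old_capacity, new_capacity, net_cost))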

    Value of Old Equipment

    The green push for efficiency has sidelined power-intensive systems in favor of low-power options. Does that mean the old systems with 1.8 volt or 1.5 volt memory are obsolete, and destined to incur a disposal cost? Don’t let the recyclers send a bill until the systems have been valued on a working basis. Even though many of the major data centers and corporations push for the cutting edge, there is a strong resale market for refurbished product. Whether the buyer is a corporation expanding under capital constraints or an organization in a country with an older technology base, there is a liquid market for used servers, networking equipment, and client systems.

    Using the Right Company for the Job

    IT giants are the best at what they do, and they focus on that by diverting non-core tasks to others. For that reason, data centers will work with an IT asset recovery company to handle all of the retired equipment, so they don’t have to do the work themselves. The recovery company knows what’s of value and therefore worth spending time on, and what should be responsibly recycled. Some recovery companies have repair capabilities and maximize return even on formerly defective equipment. Data security is a huge concern for IT companies, so they use a certified company with data destruction capabilities to ensure their brand is protected from information leaks.

    The continuous purchase of new equipment can be fueled by the effective resale of the old through an IT asset recovery company; that is how the IT giants profitably stay at the cutting edge.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:00p
    Cyan Adds Blue Orbit Ecosystem Partners

    Cyan expands its Blue Orbit ecosystem of SDN and NFV partners, and helps Colt deliver on its next-generation modular multi-service platform.

    Blue Orbit Ecosystem additions

    Cyan (CYNI) announced that Connectem, Mellanox, Metaswitch Networks, and Red Hat have joined Blue Orbit, an ecosystem of partners focused on delivering real-world, multi-vendor SDN and network functions virtualization (NFV) applications. The announcement was made during the SDN and OpenFlow World Congress event this week in Frankfurt, Germany. During the event Cyan, along with its ecosystem partners, will demonstrate two use cases: data center to data center SDN with Mellanox and Red Hat, and NFV Orchestration of vEPC (virtual evolved packet core), vRR (virtual route reflector), and vDNS (virtual domain name system) with Connectem and Metaswitch.

    “With Blue Orbit, the SDN and NFV supplier community is really stepping forward to prove interoperability in multi-vendor environments, and demonstrate that SDN and NFV can be leveraged to perform key functions critical to network operations around the world,” said Joe Cumello, chief marketing officer, Cyan.  “We welcome our newest members and look forward to the insights their participation will provide network operators looking to lessen risk and accelerate the delivery of production SDN infrastructures.”

    Colt automates Service Delivery

    Cyan also announced that Colt will be launching its next-generation modular Carrier Ethernet Multi-Service Platform (MSP) network building on Cyan’s Blue Planet software and Z-Series platforms. The approach taken is a step-change in the way hybrid Ethernet and IP services are delivered and paves the way for SDN and network functions virtualization (NFV). Colt’s Modular Multi-Service Platform is designed to meet the ever-increasing demand for bandwidth, while maintaining strict service requirements and also allowing for success-based growth and expansion over time.  Cyan technology is also supporting Colt’s efforts to connect data center hosted compute resources with network resources to enhance end-to-end performance and develop the flexible and agile service types that CIOs demand today.

    “Businesses count on us to deliver critical managed IT services, cloud services, and communication solutions that provide a superior customer experience,” said Luke Broome, CTO of Colt.  “A necessary part of delivering that experience is evolving and automating our network infrastructure and deploying best-in-class technology in a flexible and modular multi-vendor environment that will fuel innovation.  Cyan technology is an important element in allowing us to automate, provision, and manage Ethernet transport and Carrier Ethernet services regardless of the vendor equipment we choose, scale our metro network resources, and provides us with a path to future SDN and NFV-based applications and services.”

    6:15p
    Chaos Kong is Coming: A Look At The Global Cloud and CDN Powering Netflix

    Jeremy Edberg, who runs the site reliability team at Netflix, discussed the company’s approach to infrastructure and DevOps in a talk this week at the O’Reilly Velocity Conference in New York. (Photo: Colleen Miller)

    NEW YORK - Netflix is continuing to expand its infrastructure, both on the Amazon Web Services cloud and in data centers around the world. As part of this expansion, the famous Chaos Monkey and Chaos Gorilla are getting a beefy new relative: Chaos Kong.

    The streaming video titan’s infrastructure was the focus of a presentation at O’Reilly Velocity 2013 NYC conference by Jeremy Edberg, who heads the site reliability team at Netflix. Edberg, who was also the first paid employee at Reddit, gave a wide-ranging talk on how Netflix manages its huge operation and the role of developers in the process.

    Netflix sees about 2 billion requests per day to its API, which serves as the “front door” for devices requesting videos, and routes the requests to the back-end services that power Netflix. That activity generates about 70 to 80 billion data points each day that are logged by the system.

    “We like to say that we’re a logging system that also plays movies,” said Edberg. “We pretty much automate everything. That’s really the key. When there’s an operations task, we try to figure out how to automate it.”

    Simian Army Keeps Growing

    That includes the Chaos Monkey, a resiliency tool that randomly disables virtual machine instances that are in production on the Amazon cloud. The goal is to engineer applications so they can tolerate random instance failures. It’s one of a suite of Netflix tools known as the Simian Army, which also includes the Chaos Gorilla, which disables an entire AWS Availability Zone. Each of Amazon’s regions includes a number of Availability Zones (AZs) to allow users to create failover options in the event of a local outage.

    Since Netflix is now running in three different Amazon regions (Virginia, Oregon and Europe-West in Dublin), it has developed Chaos Kong, a tool that will simulate an outage affecting an entire Amazon region and then shift traffic to the remaining regions. Netflix uses Amazon’s Reserved Instances to ensure that it will have capacity available for a wholesale shift of traffic from one region to another.
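
    Netflix’s actual Simian Army tools are open source on GitHub (see the link at the end of this article); purely as a sketch of the underlying idea, the following Python snippet, using boto3 and a hypothetical “service” tag, picks one running instance at random from a service group and terminates it to verify the service tolerates instance loss.

        # Sketch only, not Netflix's tooling: terminate a random running
        # instance from an illustrative tagged group.
        import random
        import boto3

        ec2 = boto3.client('ec2', region_name='us-east-1')

        resp = ec2.describe_instances(Filters=[
            {'Name': 'tag:service', 'Values': ['api']},                 # hypothetical tag
            {'Name': 'instance-state-name', 'Values': ['running']},
        ])
        instances = [i['InstanceId']
                     for r in resp['Reservations']
                     for i in r['Instances']]

        if instances:
            victim = random.choice(instances)
            print('terminating', victim)
            ec2.terminate_instances(InstanceIds=[victim])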

    Here’s a roundup of some of the other key points from Edberg’s talk:

    Powered by Cloud (and a Huge CDN): Netflix has become the poster child for Amazon Web Services and the cloud-driven company. But much of its content is served through data centers. “We say we run everything in the cloud, but that’s really just the control plane,” said Edberg. “All the video bits are coming from a CDN we’ve built. We have servers running all around the world in remote data centers.” The Netflix CDN, known as Open Connect, is housed in 21 data centers around the world – including facilities from Equinix, Telecity, Telx, Telehouse, CoreSite, Verizon/Terremark and Global Crossing – as well as ISPs and networks. Proxy services handle the “conversations” between AWS and the data centers. Open Connect uses a 4U appliance built and designed by Netflix and using components from Supermicro, Intel, Hitachi and Seagate.

    DevOps At Netflix: The developer teams at Netflix deploy upwards of 100 releases per day. The company follows a “DevOps” model in which developers both write and deploy code. You build it, deploy it, and if you break it, you fix it, said Edberg. “We hire responsible adults and trust them to do what they’re supposed to be doing,” he said. “It works pretty well. The developers get to deploy into production whenever they want. If something breaks, you also have to fix it, even if it’s 4 am.”

    Not that this process doesn’t get interesting at times. “(Developers) are good at knowing the risk to their service,” said Edberg. “One of the downsides with this distributed infrastructure is that you may not always know how your changes will affect downstream or upstream dependencies.” While many services are limited in their scope – and hence the amount of trouble a wayward deployment can create – a configuration tool known as Fast Properties allows developers to broaden the scope of their system changes.  “A non-trivial amount of outages are due to Fast Properties changes where someone deploys globally or doesn’t understand a dependency,” said Edberg. “We’re trying to make it smarter.”

    Redundancy and the Rule of Three: “We never ever save data on a single machine,” said Edberg. “We always try to make sure we have three of everything. We’re going to ask you to run things in three availability zones, so they run in three data centers.” The company’s Cassandra database architecture runs in three different regions.
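
    One place the rule of three shows up at the database layer is in keyspace replication settings. The following sketch, with a placeholder seed host and data center names, uses the DataStax Python driver to create a Cassandra keyspace replicated three ways in each of three regions; it is an illustration of the pattern, not Netflix’s schema.

        # Sketch of three-way replication across three data centers.
        from cassandra.cluster import Cluster

        session = Cluster(['cassandra-seed.example.com']).connect()
        session.execute("""
            CREATE KEYSPACE IF NOT EXISTS viewing_history
            WITH replication = {
                'class': 'NetworkTopologyStrategy',
                'us-east': 3, 'us-west': 3, 'eu-west': 3
            }
        """)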

    For all of Netflix’s technical accomplishments, Edberg noted that its business model creates a challenge: the actual cost of downtime is hard to calculate. The company’s revenue is based on monthly subscriptions, rather than daily or hourly transactions. Cancellations are the key metric, and can’t be neatly attributed to downtime.

    Want to use the Netflix tools mentioned in this article? Check out Netflix on GitHub to see open source versions of some of these tools.

