Data Center Knowledge | News and analysis for the data center industry

Tuesday, June 23rd, 2015

    12:00p
    Docker Wants to Make the Internet Programmable

    SAN FRANCISCO – Few people know that while Docker the company may be only about two years old, work on the technology behind it has gone on for about eight years.

    Docker packages software in a way that makes it easy to take an application from a developer’s laptop and deploy it in production, whether in the company’s data center or in a public cloud. That approach has drawn a lot of attention from developers and DevOps professionals, and nearly every major IT vendor and service provider has been eager to get on board, either by partnering with Docker or at least by supporting the technology.
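
    The basic promise is that the same container image runs unchanged on a laptop, a data center server, or a cloud VM. The minimal sketch below illustrates that workflow using the Docker SDK for Python and a stock nginx image; both are assumptions for illustration (the 2015-era docker-py API looked different), not details from the announcement.

        import docker  # Docker SDK for Python (pip install docker); assumes a local Docker daemon

        client = docker.from_env()

        # Pull and run the same packaged image a developer would run locally
        # and an operator would later run in production.
        container = client.containers.run("nginx:latest", detach=True,
                                          ports={"80/tcp": 8080})
        print(container.short_id, container.status)

        container.stop()
        container.remove()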

    Over the two years that Docker has been in existence, its container software has been downloaded about 500 million times, according to the company’s CEO Ben Golub. About 150,000 “Dockerized” applications exist as of this month, and there are about 40,000 projects that use Docker on GitHub, the popular repository for open source software.

    Docker already touts a long list of big-name companies either trying the technology out or (a much rarer case) using it in production. That list includes eBay, Baidu, Yelp, Spotify, Capital One, the New York Times, and the U.S. General Services Administration.

    Making the Internet Programmable

    But it took a lot longer than two years to get there. Solomon Hykes, Docker founder and CTO, said his team has been working on the technology, “trying to make it work,” for eight years.

    Both Hykes and Golub were keynote speakers at the company’s second annual DockerCon conference in San Francisco Monday.

    Their goal, ultimately, is to make the internet programmable.

    “The internet is a pretty sweet piece of hardware,” Hykes said. “It’s probably the coolest piece of engineering that we as an industry have created.”

    The network has been around for about 50 years. It has continued to scale, and nobody has ever rebooted it for maintenance.

    Instead of programming individual devices or systems that are connected to the internet – servers, phones, TVs, cars, sensors, drones, and so on – people should be able to program across all of them. “Is it possible we can take all of that … and make the whole thing programmable?” Hykes asked from the stage.

    Five Years to Reach the Goal

    A developer today is forced to choose particular platforms and write for them, which is why Docker is building an open software layer it hopes will eventually enable developers to program whatever device is connected to the internet. And they plan to get there in five more years.

    “We’ve been doing it for eight years, so what’s another five?” Hykes said.

    There’s a consensus that Docker is not quite ready for prime time, even though some end users have built enough tooling around it to run it in production. Hykes admitted that there were still a lot of bugs to work through.

    The response from the developer community, however, is a sign that the company’s vision is on point. “Honestly, we think it’s working,” he said. “We think it’s the right way to do it, and we’re going to keep doing that.”

    Besides bugs, there are still a number of big infrastructure questions the Docker ecosystem has to answer. The company offered answers to some of them at DockerCon, introducing software-defined networking for Docker containers, a new plugin architecture, and improvements to its orchestration tools.

    It Takes a Village

    Docker isn’t alone in plugging those holes, either. In addition to the developer community contributing to the open source project, major vendors and tiny startups are building solutions that make Docker stronger as a platform for production applications.

    IBM, VMware, and Google, for example, all announced new capabilities around Docker containers Monday.

    Google launched a beta release of Container Engine, its cloud service where developers can spin up containers the way they spin up cloud VMs today. IBM launched Docker-based container services on Bluemix, its Platform-as-a-Service offering. VMware introduced AppCatalyst, a hypervisor that simulates a private cloud on a developer’s laptop and includes Docker’s engine for creating hosts for Docker containers.

    Finding Common Ground

    To ensure healthy growth for the ecosystem around application containers, unencumbered by fragmentation, Docker teamed up with CoreOS, Google, Microsoft and a host of other startups and heavyweights to launch a container standardization effort that’s independent from any one vendor.

    There have been fears of fragmentation since late last year, when CoreOS, another rising star at the intersection of software development and IT operations, introduced a container standard and runtime it said were superior to Docker’s. The new Open Container Project, with vendor-agnostic governance as part of the Linux Foundation, seems to provide at least some degree of resolution to what many have referred to as the container standard wars.

    As if to demonstrate that CoreOS and Docker are back on speaking terms, Hykes shook CoreOS CEO Alex Polvi’s hand from the stage Monday, giving him credit for being a major driving force behind the new standards organization.

    3:30p
    Disaster Preparedness Strategies for Recovery Assurance and Peace of Mind

    Dave LeClair is VP of Product Marketing at Unitrends.

    The start of hurricane season is always a great reminder about the importance of having an iron-clad disaster recovery strategy in place to protect vital data, systems and infrastructure, and to maintain business continuity in the event of an outage. Even a small amount of downtime can be detrimental to a business. According to the 2014 State of Global Disaster Recovery Preparedness report, the cost of losing critical applications to system outages can be as high as $5,000 a minute.

    Most companies today have some sort of disaster recovery “plan” in place, but many plans lack specificity and fail to take into account various types of disasters. For most of us, when thinking about events that can affect business operations, natural disasters – like hurricanes, tornadoes, fires and floods – often come to mind. And while these are certainly possibilities, most outages are caused by much less extreme factors, such as hardware failure, file corruption, cyberattacks and human error. True disaster preparedness means anticipating all of these different types of disasters and then developing customized plans for each that enable a business to maintain operations no matter what is happening around it.

    With this in mind, here are three tips to consider when developing and implementing your company’s disaster recovery strategy.

    Your People Are Your Most Important Asset

    Obviously, the well-being of your employees in a disaster situation is more important than anything else, and establishing safety protocols and procedures should be your first priority. From there, identify key operational personnel – those people without whom your business can’t operate – and provide them with the ability to work remotely or from a secondary location when a disaster strikes. Determine the steps that will be required to get those employees online and communicating with each other in the event of an outage, and make sure they have quick and easy access to the business-critical data, systems, servers and other infrastructure they need to keep the business running.

    Consider creating a contact database that includes the names, phone numbers and email addresses of all personnel who have a role in disaster scenarios, so if one mode of communication is down, you’ll be able to reach them via an alternate method. Instituting a phone tree or automatic notification system, along with a chain of command to keep processes running smoothly, is also a good idea.
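
    As a concrete illustration, such a contact database can be as simple as a list of records with a primary and a fallback channel. The sketch below is a minimal, hypothetical example in Python; all names, numbers and the send() stand-in are invented for illustration.

        # Minimal sketch of a DR contact list with alternate-channel fallback.
        CONTACTS = [
            {"name": "Jane Doe", "role": "Network lead",
             "phone": "+1-555-0100", "email": "jane.doe@example.com"},
            {"name": "John Smith", "role": "Storage lead",
             "phone": "+1-555-0101", "email": "john.smith@example.com"},
        ]

        def send(channel, address, message):
            # Stand-in for a real email/SMS gateway; here it just logs the attempt.
            print(f"[{channel}] -> {address}: {message}")
            return True

        def notify_all(message):
            # Try email first; fall back to phone if the email channel fails.
            for person in CONTACTS:
                if not send("email", person["email"], message):
                    send("phone", person["phone"], message)

        notify_all("DR plan activated: report to the secondary-site bridge.")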

    Remember to look beyond IT when identifying personnel that will play a key role in disaster situations. Successful recovery plans apply to all critical business departments and connect the appropriate people in the right way.

    Once you’ve identified key personnel, turn your attention to the vital operational processes that each employee needs to handle in an outage scenario. It’s important to provide step-by-step guidelines outlining each person’s responsibilities, clearly communicate those to each employee and then schedule continuous practice sessions to ensure each role will be executed flawlessly.

    Technology is the Foundation Upon Which Your Plan Should Be Built

    Backup and replication technologies are the foundation of any modern disaster recovery plan. IT personnel should work with their executive team and other key stakeholders to identify their business-critical infrastructure – data, systems, servers and applications – and make sure they are being backed up and replicated to an offsite location.

    Backing up and replicating data to a secondary site provides an added layer of redundancy. In the event your primary data center goes down, critical information and systems are still available via that secondary site, and business operations can proceed unaffected. More companies are turning to the cloud for backup and replication because it provides a fast, cost-effective and efficient way of storing data, systems and infrastructure, and enables retrieval of critical assets within minutes of a declared disaster.

    It’s also a best practice to store your disaster recovery plan online or in the cloud, where your people can access it anytime, anywhere in the event of a disaster, rather than only locally somewhere inside your facility. If a disaster strikes, you may not have access to your facility and the information within it.

    Once everyone agrees on what infrastructure must be protected, you need to define recovery time objectives (RTOs) and recovery point objectives (RPOs) to ensure data, systems, servers and other infrastructure can be recovered within required service level agreements (SLAs). RTOs, the amount of time a system can be down without causing too much damage, and RPOs, the amount of data your company can afford to lose without severe consequences, are where it all begins from a data recovery standpoint. Recovery processes that align with defined RPOs, RTOs and SLAs mean minimal revenue loss and brand damage if the unthinkable occurs. Every part of your disaster recovery plan – people, processes and technologies used – should work toward meeting your RPO, RTO and SLA requirements.
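
    A quick worked example, with made-up numbers, shows how these objectives translate into checks against the backup schedule and the last measured restore time:

        # Hypothetical figures, purely to illustrate the RPO/RTO check.
        backup_interval_min = 60      # backups run hourly
        measured_restore_min = 150    # the last full restore drill took 2.5 hours

        rpo_min = 120                 # business tolerates losing up to 2 hours of data
        rto_min = 240                 # systems must be back within 4 hours

        worst_case_data_loss = backup_interval_min   # everything since the last backup
        print("RPO met:", worst_case_data_loss <= rpo_min)    # True: 60 <= 120
        print("RTO met:", measured_restore_min <= rto_min)    # True: 150 <= 240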

    Testing is Vital to the Recovery Process

    Because business needs and data centers are constantly evolving, it’s important to regularly review your disaster recovery plan to ensure its continued relevance. Consistent testing is also required to ensure employees will execute their roles flawlessly, and data protection technologies will restore business-critical assets within specified RTOs, RPOs and SLAs. Think of your disaster recovery plan as a living document, one that needs to be constantly modified to ensure continued effectiveness as your business grows and requirements change.

    As much as possible, use automated processes to foolproof your disaster recovery response. There are new recovery assurance tools available that automate disaster recovery testing to ensure that your environment is certified and ready at all times. The bottom line is that a disaster recovery plan is worthless if you don’t test it. You’ll never know if it will work when you need it to, and you don’t want to find out it has failed after a disaster has occurred. Treat disaster recovery testing as a standard business practice. Set a goal to review your plan, processes and technology on, at least, a quarterly basis. Automated testing can allow you to test daily, if desired.
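ated recovery test is just a scheduled drill that restores each protected application, probes its health, and records whether it came back within its RTO. The skeleton below sketches that idea; the application names, RTO values, and the restore_from_backup()/health_check() placeholders stand in for whatever a chosen recovery assurance tool actually runs.

        import time

        APPLICATIONS = {"erp": 240, "webstore": 60}   # app -> RTO in minutes (hypothetical)

        def restore_from_backup(app):
            time.sleep(0.1)           # stand-in for booting a recovered copy in a sandbox
            return True

        def health_check(app):
            return True               # stand-in for an application-level probe

        def run_drill():
            for app, rto_min in APPLICATIONS.items():
                start = time.time()
                ok = restore_from_backup(app) and health_check(app)
                minutes = (time.time() - start) / 60
                print(f"{app}: recovered={ok}, took {minutes:.1f} min (RTO {rto_min} min)")

        run_drill()   # schedule daily or quarterly via cron or a test scheduler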

    Recovery Assurance is Priceless

    The goal of any disaster recovery plan is to minimize operational risk in the face of downtime, an outage or a disaster. While successful recovery strategies often require an upfront investment in time and budget, you’ll find that you can’t put a price tag on recovery assurance. Knowing vital corporate assets are protected and recoverable, and business operations will remain unaffected regardless of the situation around you, provides IT and executive professionals with peace of mind that many would argue is priceless.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:47p
    VMware Embeds Docker Container Capabilities in Hypervisor

    Moving to integrate Docker containers as tightly as possible within a hypervisor, VMware on Monday previewed Project Bonneville, an effort that makes it possible to download and isolate containers using VMware cloning software. The company made the announcement at DockerCon in San Francisco.

    Jared Rosoff, senior director of engineering for cloud-native apps at VMware, said the goal was to streamline IT operations workflow in data center environments that have already made massive investments in virtual machines but now also need to embrace Docker containers.

    Available on any operating system, Project Bonneville software, coupled with instant cloning, will create a VM that is light enough to support a single instance of Docker, while allowing IT organizations to preserve all their existing investments in VMware management software, he said.

    He noted that for years to come IT operations teams will have to manage workloads that run on both Docker and VMs. Rather than maintain two separate management frameworks, VMware has created a hypervisor optimized for Docker that makes Docker containers not only easier to manage, but also more secure, Rosoff said.

    The end result should be a sharp reduction in the need for IT organizations that have standardized on VMware management software to master and deploy separate management frameworks specifically optimized for Docker containers.

    “We want to make it easier to manage and run Docker in production environments,” said Rosoff. “To help make that happen we’ve opened up our hypervisor.”

    In addition to Project Bonneville, VMware also previewed a new hypervisor that is purpose-built for developers. Accessible via a REST API or command line interface, the AppCatalyst hypervisor enables developers to replicate a private cloud locally on their desktop to make it simpler to build and test containerized and microservices-based applications.

    Making use of Project Photon, an open source minimal Linux container host, along with Docker and the Vagrant workflow management tool, AppCatalyst is available today for Mac OS X as a free download from the AppCatalyst technology preview community site.

    In general, VMware is now making a concerted effort to reach out to developers in ways that are specifically designed to give them more control over data center resources, Rosoff said.

    Rosoff declined to speculate about what other technologies the company may consider embedding deeper inside its core technology now that it has opened up its hypervisor. But for IT organizations that have invested heavily in VMware management software, the possibilities going forward should at the very least be intriguing.

    5:27p
    Mellanox Embraces Next-Gen Data Center Ethernet

    Looking to accelerate the shift to 25 Gigabit, 50 Gigabit, and 100 Gigabit Ethernet architectures, Mellanox Technologies has unveiled a new high-end switch alongside a new line of faster network adapters.

    Gilad Shainer, vice president of marketing at Mellanox, said at this point making the shift to 25 GbE and 50 GbE is a no-brainer, because the price points are now essentially the same as for the older 10 GbE and 40 GbE data center Ethernet technologies.

    As IT organizations make that switch at the adapter level, Shainer added, a migration to 100 GbE switches becomes all but inevitable. In fact, the Big Data applications rapidly emerging inside the data center depend on access to large amounts of network bandwidth to succeed, and they are increasingly forcing a network upgrade.

    “The key to being able to use data is to actually move it,” Shainer said. “You need to be able to move data fast enough to support all these new applications.”

    Built around a Spectrum integrated circuit developed by Mellanox, the company’s line of data center Ethernet switches can be configured to support 10 GbE, 25 GbE, 40 GbE, 50 GbE, and 100 GbE connectivity, delivering non-blocking throughput of 6.4 Tbps at full wire speed. Those switches can also be controlled via an open source API that Mellanox co-authored and contributed to the Open Compute Project as the Switch Abstraction Interface specification.

    The switches themselves support twice as many virtual machines as the previous generation and can be configured with 32 100 GbE ports, 32 40/56 GbE ports, 64 10 GbE ports, 64 25 GbE ports, or 64 50 GbE ports.
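
    One back-of-the-envelope way to read the headline throughput figure, shown in the short calculation below: the 32-port 100 GbE configuration carries 3.2 Tbps in each direction, and counting both directions of a full-duplex, non-blocking fabric gives the quoted 6.4 Tbps. The arithmetic is an illustration, not Mellanox’s own framing.

        ports, speed_gbps = 32, 100
        one_way_tbps = ports * speed_gbps / 1000
        print(one_way_tbps)        # 3.2 Tbps per direction
        print(2 * one_way_tbps)    # 6.4 Tbps counting both directions of full duplex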

    Meanwhile, like other Mellanox adapters, the ConnectX-4 Lx 10/25/40/50 GbE adapter is designed to support Mellanox Multi-Host technology that enables multiple compute and storage hosts to connect to a single adapter. ConnectX-4 Lx also includes native hardware support for RDMA over Converged Ethernet (RoCE), stateless offload engines, and GPUDirect.

    The end result, Shainer said, is 2.5 times greater performance in the same adapter footprint.

    Given that most IT organizations have not yet made the move to 10 GbE and 40 GbE, the chances that both of those technologies will soon be orphaned inside the data center are fairly high. In fact, as demand for 25 GbE and 50 GbE continues to expand, it might not be long before those speeds are less expensive to deploy than 10 GbE and 40 GbE.

    Whatever the outcome, IT organizations that have taken their time when it comes to upgrading their data center environments may very well soon find themselves enjoying a significant second-mover advantage.

    5:58p
    Commodity Data Center Storage: Building Your Own

    Commodity storage is becoming a really interesting topic. I recently had a conversation with a friend, an administrator, who asked me whether it’s a good idea to buy a commodity server chassis and fill it with flash drives to create their own data center storage system. They argued that they could use a hypervisor or third-party software to manage it, create high availability, and even extend into the cloud.

    Does it make sense for everyone? What are the actual ingredients in creating your own commodity system?

    Before we dive in, there are a couple of points to understand. Data center storage has become a very hot topic for large, small, and mid-size enterprises. They’re all looking for ways to better control their arrays, disks, and now expanding cloud environments. There will certainly still be many use cases for traditional storage systems. However, new virtual layers are allowing for even greater cloud control and data abstraction.

    With that in mind, let’s look at how you can deploy your own commodity storage platform.

    The Hardware

    Depending on your use case, you might have a few different configuration considerations. In some cases you’re designing for pure IOPS; there, you’ll want to use an SSD array. In other cases, where you want a mix of more capacity and some performance, you’re probably going to want a mix of SSD and HDD. The point is that you can populate an entire set of servers with the kind of disk you require to allow it to later become your high-performance repository. Consider the following (a short sketch of this selection logic follows the list):

    1. Full SSD arrays are ideal for non-write-intensive applications requiring loads of IOPS but small capacities.
    2. Hybrid or very fast HDD systems are ideal for most high-performance applications including virtualization, transaction processing, business intelligence, data warehouse, and SLA-based cloud applications.
    3. Low-cost SATA HDD drive systems are ideal for backup, write-once/read-infrequently applications, and archive-based systems.
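
    Expressed as code, the tier-selection logic above might look like the minimal sketch below; the IOPS and capacity thresholds are illustrative assumptions, not vendor guidance.

        def pick_tier(iops_needed, capacity_tb, write_heavy, archive):
            if archive:
                return "low-cost SATA HDD"            # backup / write-once, read-rarely data
            if iops_needed > 50_000 and capacity_tb < 20 and not write_heavy:
                return "all-flash (SSD) array"        # IOPS-bound, modest capacity
            return "hybrid SSD + HDD"                 # most mixed high-performance workloads

        print(pick_tier(iops_needed=80_000, capacity_tb=5, write_heavy=False, archive=False))
        print(pick_tier(iops_needed=2_000, capacity_tb=200, write_heavy=True, archive=False))
        print(pick_tier(iops_needed=500, capacity_tb=500, write_heavy=True, archive=True))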

    You can pretty much go full-on commodity: purchase a set of low-cost servers and populate them with the disk you want. Alternatively, there are new storage solutions that strip out a lot of the software add-ons you might not need or want.

    Something to keep in mind: new hyper-converged virtual controller solutions now focus on absolutely pure performance, maximum capacity, and new kinds of cloud capabilities. For a number of organizations moving toward a more logically controlled data center storage platform this is very exciting. Specifically, they are looking for solutions that offer “commodity-style” storage while still offering a warranty and manufacturer support.

    What about all of those features? This brings us to the next point.

    Software-Defined Storage

    New architectures focusing on hyperscale and hyper-convergence allow you to abstract all storage directly and manage it at the virtual layer. These new virtual controllers can reside as virtual machines on a number of different hypervisors and, from there, act as enterprise storage controllers spanning your data center and the cloud.

    This kind of hyper-converged virtual storage architecture delivers pretty much all of the enterprise-grade storage features out there, including deduplication, caching, cloning, thin provisioning, file replication, encryption, high availability (HA), and disaster recovery/business continuity (DRBC). Furthermore, REST APIs can directly integrate with proprietary or open source cloud infrastructure management systems.

    Now you can take your underlying commodity storage system and allow it to be completely managed by the logical layer. The cool part is that you can also point a virtual storage controller at legacy storage gear to give it new life. You’re almost ready to deploy your own commodity storage platform!

    The Workloads

    Actually, the workloads don’t matter as much as the policies you wrap around them. Here is your chance to create a truly efficient, SDS-managed platform. You can set very specific policies around high-performance workloads and around workloads that should go to cheaper, slower disk.

    Furthermore, you can instruct that very same workload to span into the cloud. This is the part of the recipe where you have to figure out your own ingredients. What kinds of workloads do you have? Big Data, VDI, app delivery, and database workloads all have very different requirements. Most importantly, you’re also positively impacting both business processes and end-user experiences. Storage economics can shift dramatically when software-defined storage is coupled with custom, or whitebox, storage architectures.
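
    To make the policy idea concrete, here is a minimal sketch of per-workload placement rules of the kind an SDS layer lets you declare; the workload names, tiers, and fields are hypothetical.

        POLICIES = {
            "vdi":       {"tier": "all-flash", "replicas": 2, "span_to_cloud": False},
            "analytics": {"tier": "hybrid",    "replicas": 2, "span_to_cloud": True},
            "archive":   {"tier": "sata-hdd",  "replicas": 1, "span_to_cloud": True},
        }

        def place(workload):
            # Fall back to a conservative default when no explicit policy exists.
            policy = POLICIES.get(workload,
                                  {"tier": "hybrid", "replicas": 2, "span_to_cloud": False})
            target = "on-prem + cloud" if policy["span_to_cloud"] else "on-prem"
            return f"{workload}: {policy['tier']} x{policy['replicas']} ({target})"

        for workload in POLICIES:
            print(place(workload))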

    As you look at this list of steps, you might be asking whether it is really that easy. The reality is that it all comes down to your specific use case and business. These kinds of technologies are revolutionizing the way we control and manage data. New powerful VMs are helping us abstract storage and allow it to live in the data center, cloud, and beyond. But does it make sense for everyone to go out and do this? How nervous are folks like EMC about the future?

    Regardless, new ways to deploy storage are now making an impact across a larger number of organizations. And new capabilities around cloud are allowing data centers to create even more elastic storage architectures.

    6:07p
    Understanding DCIM and Process Ownership

    Today’s data center supports new use cases designed to enhance collaboration, improve business processes, and create a more productive environment. With these new use cases, the modern data center has become a much more complex entity. DCIM tools, for example, have been around for some time, but they are now involved in numerous aspects of the data center control process, and data centers are segmenting DCIM capabilities across business units to enable more intelligent data center management.

    With these new tools and capabilities, there are also new challenges to ensure data centers are cost-efficient, effectively managed, and empowering for the people involved. This white paper addresses these concerns by asking two very specific questions:

    • Who owns the DCIM solution?
    • How can IT and Facilities work together to ensure both the operations of a data center and the corporate objectives are met without impacting individual goals?

    These two questions are key to establishing DCIM ownership best practices. With these best practices in place, data center personnel can be better aligned with a defined set of responsibilities and a plan for approaching DCIM setup, deployment, and usage. Download this white paper today to learn how to establish proper DCIM ownership for your data center.

    6:28p
    First Wholesale Data Center Suite Opens at Dallas Infomart

    Data center service provider Infomart Data Centers has opened the first wholesale suite at the major southern data center and network-carrier hub it operates.

    The Dallas Infomart is one of the most important interconnection buildings in the south, providing access to an array of carriers for connectivity within the US or into Latin America. The wholesale data center suite at the Infomart makes it possible for a service provider or (less likely) an enterprise customer to take 3 MW of capacity in the carrier hotel at once, as opposed to renting cages on a retail basis.

    “Everyone connects to the internet through our Dallas facility,” John Sheputis, president of Infomart Data Centers, said in a statement. “All of the major carriers are interconnected here. Every application hosted here will have a performance edge.”

    The company traditionally specialized in retail colocation, but last year it merged with Fortune Data Centers, a wholesale Silicon Valley data center provider, and now seems to be branching out beyond its traditional business.

    A wholesale data center suite in a highly interconnected building is most appealing to a colocation, hosting, or a cloud service provider – a business that needs access to as many network carriers as possible and that can leverage physical proximity to customers as well as other service providers colocated in the building.

    One of Infomart’s biggest customers in Dallas is Equinix, the world’s biggest colocation and interconnection service provider, which has hosted infrastructure there since 2000, according to an earlier interview with an Equinix representative.

    Equinix’s most recent expansion in the building came last year: a 1.3 MW addition with capacity to support 450 cabinets. But Equinix said its DA6 data center at the Dallas Infomart could accommodate three more similar phases, so Equinix may not be the immediate candidate to take the new space.

    Dallas Infomart said it invested about $40 million in building out and optimizing data center space in the 1.6 million square foot building at 1950 Stemmons Freeway in Dallas.

    The new wholesale suite is 24,000 square feet. It sits adjacent to a carrier-neutral building meet-me room, where tenants interconnect their networks.

