Tech Primer: Clarity on Containers

Kong Yang is Head Geek at SolarWinds.

Container ecosystems from the likes of Google, Docker, CoreOS, and Joyent are easily one of the more intriguing IT innovations in the enterprise and cloud computing space today. In the past year, organizations across all major industries, from finance to e-commerce, took notice of containers as a cost-efficient, portable, and convenient means to build an application. The prospect gave organizations an exciting new model to compare and contrast with virtualization. But for all the hype, many organizations and IT professionals still struggle to understand both the technology itself and how to take advantage of its unique benefits, especially as Docker, the market leader, begins to expand the use cases for containers into the stateful architectural landscape that is common with enterprise applications. This technology primer aims to provide clarity on containers and arm you with everything you need to know to leverage the technology successfully.

Containers 101: Back to Basics

To start, one of the main misconceptions about containers is that they are wholesale replacements for virtual machines (VMs). Despite some early enterprise adopters implementing them as such, that is not the case. In a nutshell, a container consists of an entire runtime environment: an application, plus the dependencies, libraries, other binaries, and configuration files needed to run it, bundled into one package designed for lightweight, short-term use. When implemented correctly, containers enable much more agile and portable software development environments. Containers simply abstract away the need for traditional servers and operating systems.

Virtualization, on the other hand, includes a hypervisor layer (whether Microsoft Hyper-V or VMware vSphere) that segregates virtual machines and their individual operating systems. Virtualization abstracts the resources of the underlying hardware infrastructure, consisting of servers and storage, so that VMs can draw on these pools of resources. VMs can take considerably longer than containers to prep, provision, and deploy, and they tend to stay in commission much longer, so VMs tend to have much longer application lifecycles. A key difference, therefore, is that the container model is not intended to be a long-term environment like a VM; containers are designed (ideally) to be paired with microservices, do one thing very well, and move on.

With this in mind, let's discuss some of their benefits. First, as mentioned, containers spin up much more quickly and use less memory, ultimately leaving a smaller footprint on data center resources than traditional virtualization. This matters because it makes the development team's processes more efficient, which in turn leads to much shorter development and quality assurance testing cycles. With containers, a developer could write and quickly test code in two parallel container environments to understand how each performs and decide which code fork to take. Docker builds an image automatically by reading the set of instructions stored in a Dockerfile, a text file that contains all the commands needed to build a given image. Containers are therefore meant to be ephemeral: they can be stopped, changed, and rebuilt with minimal setup and configuration.
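As an illustration of that build process, here is a minimal Dockerfile sketch. The base image, file names, port, and start command are hypothetical choices for a small Python web service, not anything prescribed by Docker.

    # Hypothetical Dockerfile for a small Python web service
    FROM python:3.11-slim
    WORKDIR /app
    # Install dependencies first so they are cached across rebuilds
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # Copy the application code and declare how the container starts
    COPY . .
    EXPOSE 8000
    CMD ["python", "app.py"]

Running docker build -t my-service . against this file produces an image, and docker run -d -p 8000:8000 my-service starts a container from it (my-service is an arbitrary example tag). Because the build instructions live in a plain text file alongside the code, every rebuild yields the same environment.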
Containers can also support greater collaboration among the members of a team contributing to the same project. Version control and consistency of applications can be problematic when multiple team members work in their own virtual environments; think of all the different combinations of environment configurations. Containers, by contrast, drive consistency in the deployment of an image, and combining them with a hub such as GitHub allows for quick packaging and deployment of consistently known-good images. The ability to quickly spin up mirror images of an application lets members of the same development team test and rework lines of code in flight, within disparate but consistent image environments that can ultimately synchronize and integrate more seamlessly.

Interestingly, Docker has begun evolving container technology beyond the typical test-dev model, both to deliver additional business value and to lower the barrier to adoption for enterprises. Docker Engine runs on the major desktop platforms, including Windows, Linux, and Mac operating systems, which lets organizations gain experience with containers while demoing and testing a few use cases on laptops. Additionally, Docker's recent acquisition of Infinit, a distributed storage vendor, underscores the company's intention to expand and support enterprise needs. Integrating Infinit's technology will allow developers to deploy stateful web architectures and legacy enterprise applications, and this combination aims to persuade organizations saddled with technical debt and legacy applications to adopt containers.

Virtualization or Containers: Which Is Right for You?

So, how do you decide when to leverage containers? It starts with a fundamental understanding of your application architecture and its lifecycle, from development through production to retirement. Establishing this baseline will help you decide whether a given application is an ideal candidate for containers or better left as a VM. An e-commerce site, for instance, might decide to transition from several VMs executing multiple functions to a container-based model in which the tiered "monolithic" application is broken down into several services distributed across public cloud or internal infrastructure. One container image would then be responsible for the application client, another for the web services, and so forth. These containers can be shipped to any number of host machines with identical configuration settings, so you can scale and drive consistency across the e-commerce site (see the sketch at the end of this section).

However, even though some applications in your environment might be prime candidates to shift to containers, the cost of evolving people, processes, and technology remains a large obstacle for most organizations that currently support virtualization. Heavy investments in vSphere, Hyper-V, or KVM virtualization solutions, not to mention the accumulated technical expertise and processes that support them, are a key reason why businesses struggle to adopt container technology today. Despite this, organizations should look for opportunities to gain experience with container technology. As demonstrated by Docker's expansion into more enterprise capabilities, containers can certainly begin to play a larger role in the modern data center, where web scale and mobile rule.
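To make the e-commerce scenario above concrete, and to show the kind of laptop-scale experimentation described earlier, here is a hedged sketch using standard Docker CLI commands. The registry address, image names, container names, and ports are hypothetical placeholders.

    # Pull the (hypothetical) tier images from a private registry
    docker pull registry.example.com/shop/web-client:1.0
    docker pull registry.example.com/shop/web-services:1.0

    # Run each tier as its own container; names and port mappings are illustrative
    docker run -d --name shop-web -p 8080:80 registry.example.com/shop/web-client:1.0
    docker run -d --name shop-api -p 8081:5000 registry.example.com/shop/web-services:1.0

    # List what is running, then stop and remove the containers when finished
    docker ps
    docker stop shop-web shop-api
    docker rm shop-web shop-api

Because every host starts from the same images, scaling out or rebuilding an environment is a matter of repeating the same docker run commands on another machine rather than hand-configuring each server.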
Getting There: Best Practices for Working with Containers

The following best practices will help businesses better prepare themselves to work with and manage containers:

Conclusion

In the year ahead, I expect IT departments will finally come to a greater understanding of container technology and of how it can realistically and appropriately be used for IT operations alongside virtual infrastructure.