We are pleased to share with you all an interesting article contributed by Nikhil Vyakaranam and Dilip Krishna S.
Nikhil Vyakaranam Technical Marketing Engineer at Cisco Systems
Dilip Krishna S Technical Marketing Engineer at Cisco Systems
The Problem
How often have we run into issues when installing an application on a host operating system? The application might conflict with other applications, for example by needing the same port; you may want to install multiple versions of the same software side by side; or you may not want some applications to read certain configuration files. Now think of a scenario where you need to move or redeploy the same application elsewhere: it is very unlikely that you will be able to pick up all the dependencies from the host machine and move them to another host.
The Argument
Virtualization, or Virtual Machines, offers a solution to all the problems mentioned above. You can spin up a VM for running a specific application, and it will include its own kernel, file system, network stack, and so on. If you want to try a new version of the same application, just spin up another VM: it will have no visibility into the other VMs or the host OS, and it can use the same network and port as the others. If you decide to move your application to another server, you can use VM migration tools, or even convert the VM into a template, carry it on a pen drive, and deploy it elsewhere.
Flexible, isn’t it? But there are disadvantages too, chiefly the resources a VM consumes. Even if you are deploying a single application that needs very few resources, the VM still requires a certain amount of compute for the guest OS and the hypervisor to run. You can only imagine the extent to which such resources are wasted in a large datacenter.
Another talking point is the time that Virtual Machines take to start up and shut down, typically on the order of a few minutes.
What are Containers?
A common definition says “Containers are an abstraction at the app layer that packages code and dependencies together.” In practice, this means that only the application and its dependent binaries and libraries are packaged into a container, with no extra baggage of an operating system.
How does it work then?
Containers utilize the host operating system kernel and run in isolated user spaces. Multiple containers can run on a single host: they share the host OS kernel but live in isolated user spaces with no visibility into each other. From inside a container, you see a filesystem of its own, not the host filesystem. A container also has its own process table, separate from the host OS process table. Remember that hardware virtualization is what enables Virtual Machines; containerization, by contrast, is all about operating system virtualization. Containers are lightweight because they do not depend on an additional layer such as a hypervisor.
Containers use a layer of software called a container engine on top of the OS; Docker is one example. Containers have significantly lower overhead than VMs because they share the kernel with the host OS, which also means they can start and stop extremely fast. Typically, the startup time is simply the time the containerized process takes to start.
A typical path to production involves the software going through a development environment, a test environment, and finally the live or production environment. Traditionally, each of these stages involves installing and configuring a staging environment, including all the complex dependencies, which roughly triples the effort. In the world of containers, if you want to move your application, say, from test to production, you just build an image and use the same image in the production environment. Upgrading application software changes as well. Traditional methods involve upgrading your virtual machines, from the application’s dependencies up to the application itself. With containers we arrive at “immutable infrastructure”: there are no upgrade procedures any more; you simply delete your current containers and create new ones, which can be spun up in a matter of seconds. There are also CI/CD tools that support this workflow in production from a different angle; we plan to cover this topic in detail in upcoming posts.
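As a minimal sketch of this build-once, run-anywhere workflow (the image contents here are hypothetical: a small Python application with its dependencies listed in `requirements.txt`), a Dockerfile packages the application and everything it needs into a single image:

```dockerfile
# Hypothetical example: a small Python application packaged with its dependencies.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]
```

The same image built in the test environment (for example, `docker build -t myapp:1.0 .`) can then be run unchanged in production (`docker run -d myapp:1.0`). Upgrading means building a new image and replacing the running containers, rather than patching them in place.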
The discussion above can be summed up in the diagrams below. Figure-1 shows how multiple layers sit between the application and the host operating system. Figure-2 shows how the application and its dependencies are packaged, with the containers running directly on the host OS.
However, Figure-2 gives the impression that the container engine sits in the execution path between the application and the host OS, which is not the case. Figure-3 removes this ambiguity by showing the container engine as a daemon running on the host OS. The daemon manages the containers and the container images.
Figure-3
Some other examples of container technologies include LXC, OpenVZ, Linux-VServer, BSD Jails, and Solaris Zones.
For more articles by Nikhil Vyakaranam and Dilip Krishna S on Technically Speaking, please see: https://cloudifynetwork.com |