You get the most out of containers if you run parts of your application independently of others.
This approach has numerous benefits, as follows:
- You can release your application more often as you can now change a part of your application without this impacting something else; your deployments will also take less time to run.
- Your application's parts can scale independently of each other. For example, if you have a shopping app and your orders module is under heavy load, it can scale out further than the reviews module, which may be far less busy. With a monolith, your entire application would scale with traffic, which is not the most efficient approach from a resource consumption point of view.
- Something that impacts one part of the application does not compromise your entire system. For example, customers can still add items to their cart and check out orders if the reviews module is down.
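The independent-scaling benefit above can be sketched with a hedged Kubernetes example. This is an illustration only: the `orders` and `reviews` names and the image paths are hypothetical placeholders, and a real setup would typically use autoscaling rather than fixed replica counts.

```yaml
# Hypothetical sketch: the busy orders module runs five replicas,
# while the quieter reviews module runs just one. Each scales
# independently of the other.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 5                  # busy module: more replicas
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example.com/shop/orders:1.0    # placeholder image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews
spec:
  replicas: 1                  # quieter module: fewer replicas
  selector:
    matchLabels:
      app: reviews
  template:
    metadata:
      labels:
        app: reviews
    spec:
      containers:
      - name: reviews
        image: example.com/shop/reviews:1.0   # placeholder image
```

With a monolith, by contrast, you would have to scale the entire application as one unit, paying for capacity the reviews code does not need.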
However, you should also not break your application into components that are too tiny. That will result in considerable management overhead, as you will struggle to keep track of what each component does. In terms of the shopping website example, it is OK to have an order container, a reviews container, a shopping cart container, and a catalog container. However, it is not OK to have separate create order, delete order, and update order containers – that would be overkill. Breaking your application into logical components that fit your business is the right approach.
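The "logical components, not operations" split above could be sketched as a Docker Compose file. This is a hedged illustration, not a prescribed layout: the service names mirror the shopping example, and the image paths are placeholders.

```yaml
# Illustrative sketch: one container per logical business component.
# Note there is an orders service, but NOT separate create-order,
# delete-order, and update-order services - those operations live
# inside the orders container.
services:
  orders:
    image: example.com/shop/orders:1.0     # placeholder image
  reviews:
    image: example.com/shop/reviews:1.0    # placeholder image
  cart:
    image: example.com/shop/cart:1.0       # placeholder image
  catalog:
    image: example.com/shop/catalog:1.0    # placeholder image
```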
But should you break your application into smaller parts as the very first step? Well, it depends. Most people want a return on investment (ROI) from their containerization effort. If you do a lift and shift from virtual machines to containers, you are dealing with very few variables and can move to containers quickly. However, you don’t get any real benefit out of it – especially if your application is a massive monolith. Instead, you add some overhead because of the container layer. So, rearchitecting your application to fit the container landscape is the key to moving forward.
Are we there yet?
So, you might be wondering, are we there yet? Not really! Virtual machines are here to stay for a very long time. They have good reason to exist, and while containers solve most problems, not everything can be containerized. Many legacy systems running on virtual machines simply cannot be migrated to containers.
With the advent of the cloud, virtualized infrastructure forms its base, and virtual machines are at its core. Most containers run on virtual machines within the cloud, and though you might be running containers in a cluster of nodes, these nodes would still be virtual machines.
However, the best thing about the container era is that it treats virtual machines as part of a standard, uniform setup. You install a container runtime on your virtual machines and no longer need to distinguish between them – you can run your applications within containers on any virtual machine you wish. With a container orchestrator such as Kubernetes, you also benefit from the orchestrator deciding where to run the containers while considering various factors, resource availability being among the most critical.
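To give the scheduling point above some shape, here is a hedged sketch of how a container declares its resource needs in Kubernetes. The name and image are hypothetical; the key part is the `resources` section, which the scheduler consults when choosing a node (typically a virtual machine) with enough free capacity.

```yaml
# Illustrative sketch: resource requests tell the Kubernetes scheduler
# how much CPU and memory this container needs, so it can pick a node
# (virtual machine) that has that much capacity available.
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
  - name: orders
    image: example.com/shop/orders:1.0   # placeholder image
    resources:
      requests:
        cpu: "250m"       # a quarter of a CPU core, used for scheduling
        memory: "256Mi"   # minimum memory the scheduler must find
      limits:
        cpu: "500m"       # hard ceilings the container cannot exceed
        memory: "512Mi"
```

The pod is scheduled onto any node that can satisfy the requests; you do not say, and do not care, which virtual machine that is.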
This book will look at various aspects of modern DevOps practices, including managing cloud-based infrastructure, virtual machines, and containers. While we will mainly cover containers, we will also look at config management with equal importance using Ansible and learn how to spin up infrastructure with Terraform.
We will also look at modern CI/CD practices and learn how to deliver an application into production efficiently and error-free. For this, we will cover tools such as Jenkins and Argo CD. This book will give you everything you need to undertake a modern DevOps engineer role in the cloud and container era.
Summary
In this chapter, we discussed modern DevOps, the cloud, and modern cloud-native applications. We then looked at how the software industry is quickly moving toward containers and how, with the cloud, it is becoming more critical for a modern DevOps engineer to have the required skills to deal with both. Then, we took a peek at the container architecture and discussed some high-level steps in moving from a virtual machine-based architecture to a containerized one.
In the next chapter, we will look at source code management with Git, which will form the base of everything we will do in the rest of this book.