If you’re thinking about using containers to manage an application, there are a lot of options for technologies to use. It can be difficult to even know where to begin to make a decision. One common question is whether someone should use Docker vs Kubernetes for managing their application containers. This is a misleading question. In truth, Docker and Kubernetes aren’t competing technologies. There’s no need for them to face off.
Instead, they’re technologies that can work together to make managing your application easier. In this article, we’ll dig into what Docker is, what Kubernetes is, and how they can work together.
Docker is a technology, first released in 2013, that runs applications inside virtual containers on a computer. Those containers have everything the application needs to run already stored inside them. These containers are easily ported to other computers through Docker’s use of images, which are saved states of a container. An image of a Docker container will run the same on any computer system you install it on, with no other configuration required.
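To make that concrete, here’s a rough sketch of what this looks like in practice, assuming a hypothetical Python web app with an app.py entry point (the file names, image tag, and port are placeholders):

```dockerfile
# Dockerfile for a hypothetical Python web app (illustrative only)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # bake the app's dependencies into the image
COPY . .
CMD ["python", "app.py"]              # what runs when a container starts from this image
```

```bash
docker build -t my-app:1.0 .           # save everything the app needs as an image
docker run -d -p 8080:8080 my-app:1.0  # start a container from that image on any machine running Docker
```

Once the image is built, the second command starts an identical container on any machine that has the image, which is exactly the portability described above.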
Docker is useful during the development, testing, and deployment phases of a software project. Because the Docker container works the same in every environment, it simplifies the deployment of a software application into a broader environment.
As noted, Docker is good at wrapping up all the packages and configuration you need in order to run a software application into a portable container image. That image is easily shared with other people on your local network or across the internet using Docker Hub.
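Sharing an image over Docker Hub is only a couple of commands, assuming a hypothetical Docker Hub account named youruser:

```bash
docker tag my-app:1.0 youruser/my-app:1.0   # name the image under your Docker Hub account
docker push youruser/my-app:1.0             # upload it to Docker Hub
docker pull youruser/my-app:1.0             # anyone else can now download the exact same image
```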
If you’re building a complicated software application, you can use Docker containers in combination to build more complicated systems. For instance, a complicated application might need a database, message queue, and application server to all run together. Docker allows you to configure and run all three of those applications side-by-side.
What’s more, you can separate the requirements for each type of server into their own container. Does your database server need a different version of a library than your message queue? Docker means you never have to worry about those two libraries conflicting.
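A common way to run services side by side like this is Docker Compose. The sketch below is illustrative rather than a production configuration; the image names, port, and password are placeholders:

```yaml
# docker-compose.yml -- a hypothetical app server, database, and message queue side by side
version: "3.8"
services:
  app:
    image: youruser/my-app:1.0
    ports:
      - "8080:8080"
    depends_on:
      - db
      - queue
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use a proper secret in real deployments
  queue:
    image: rabbitmq:3
```

Running `docker compose up -d` starts all three containers together, each with its own isolated set of libraries.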
Docker also makes it really easy to experiment with new software libraries. Because each container is built from an unchanging image, if you make a mistake, it’s trivial to revert your container back to the saved image. No matter how badly you mess up, fixing your container is just a moment’s work.
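In practice, “fixing” a broken container usually just means throwing it away and starting a fresh one from the same image, along these lines (the container and image names are placeholders):

```bash
docker rm -f experiment                      # discard the container you broke
docker run -d --name experiment my-app:1.0   # start a clean one from the saved image
```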
Containers can have positive security benefits, too. If your application server and database server are running in different containers, a compromised application server can’t access the database server’s memory. Docker has a pretty wide variety of strengths. That’s why it’s been growing in popularity for the last few years and shows no signs of stopping.
Like all technologies, Docker has its downsides. For starters, Docker containers run in virtualized environments, and all virtual environments come with a performance penalty. Docker simply isn’t as fast as running an application directly on a dedicated server. Most of the time, this is okay. Most applications don’t notice the difference in performance between virtual and dedicated environments.
But if you’re working in an environment where latency and speed are of the utmost importance, you’ll want to avoid Docker. Another downside is that if the application you need has a graphical user interface, you’re probably out of luck. While it’s possible to run a GUI application in a Docker container, it’s pretty convoluted and you’re going to have a tough time.
You can work around or mitigate some of these issues. For instance, it’s possible to tune Docker to improve its performance if you can spend the time. If you’re concerned with Docker’s overall performance in production, Stackify’s Retrace is a powerful tool to help you identify the bottlenecks of your application. Unfortunately, you can’t work around some of Docker’s issues.
If your application requires two services to run together in the same environment, sharing the same machine and resources at the same time, containers aren’t going to be a good fit. Understanding the weaknesses of Docker and how they relate to your use case is key to maximizing the return on your work to set up Docker.
As we’ve noted, Docker is a powerful tool. It makes it easy to pass unchanging container images between computers, which simplifies development, testing, and deployment of new applications. Even if you have an application that isn’t a perfect fit for containers, the consistent deployment pattern might be worth the time you need to set it up.
If your application is a good fit for containers though, Docker pays back your effort abundantly; if you’re setting up applications that can run by themselves, don’t require a graphical interface, and need to be consistently deployed, Docker is a great fit. Database servers are a great example of this kind of application; it’s no surprise that database servers are some of the most commonly downloaded images on Docker Hub.
It’s rare to use Docker for just one part of an application. Most of the time, once a team has one part of their application using Docker, they’ll switch to using Docker to manage all parts of their application together. It won’t just be the database server, but also the application server, message queue, and load balancer.
When you start to put all of those pieces together, Docker can become pretty complicated. This becomes even truer if you need to run more than one instance of a particular part of your application. This is where Kubernetes comes in.
Kubernetes is an open-source project for managing complicated containerized applications. Originally developed at Google, it was announced in 2014, and version 1.0 was released in 2015. It’s now managed by an open-source software foundation, the Cloud Native Computing Foundation.
Whereas Docker controls the containers for one or a few parts of a single application, Kubernetes manages dozens of containers together. Unlike Docker, Kubernetes isn’t a tool for managing containers during your development or testing process. It’s for making sure that your containers are all running well once they’re in production.
Kubernetes is used to make sure that your production application is running the way that it should be. This doesn’t just mean making sure that all of your containers are up and running, though it can help with that. Kubernetes can make your application self-healing. This means that it’ll detect when an existing container has gone into an unresponsive state, and it’ll start a new container to replace it.
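Self-healing typically comes from combining a Deployment, which keeps a set number of replicas running, with a liveness probe, which tells Kubernetes when a container has stopped responding. Here’s a minimal sketch, assuming the image from earlier and a hypothetical /healthz endpoint on port 8080:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                      # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: youruser/my-app:1.0
          ports:
            - containerPort: 8080
          livenessProbe:           # if this check fails, Kubernetes restarts the container
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
```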
Kubernetes can also help to scale your application horizontally. It might do this by starting up new containers to handle CPU-intensive parts of your application when you’re under heavy load. It’s also useful for simplifying your deployments and minimizing downtime. You can configure Kubernetes to keep the existing version of your application running while you deploy a new version. It’ll make sure that the new version is correctly started and accepting connections before it turns off the old version of your application. Your users never even notice the switch.
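Horizontal scaling is usually handled by a HorizontalPodAutoscaler that targets a Deployment like the one above; the thresholds here are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10                  # add containers under load, remove them when it passes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # aim to keep average CPU usage around 70%
```

The zero-downtime behavior described above is the Deployment’s default RollingUpdate strategy: new pods are started and must pass their health checks before the old ones are taken out of rotation.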
Kubernetes is a powerful tool, and it’s able to do a lot of things. This can make the learning curve really steep. When you’re getting started with Kubernetes, it might feel like a tool with unlimited possibilities. As you’re starting out, you should make sure to seek out resources that will help you learn.
You’ll eventually learn that it’s not a totally unlimited tool, but it’s able to do a lot of things. People who use Kubernetes to manage their cloud infrastructure describe massive reductions in the time it takes to deploy their application. Their applications run with fewer interruptions, and their customers appreciate the increased responsiveness due to automatic scaling.
Just like Docker, Kubernetes isn’t a silver bullet. For starters, Kubernetes is very complicated. As we previously noted, it can do a lot of things very well. This is a double-edged sword: it’s possible to do a lot of different things, but it’ll take you a long time to learn to do them. You’ll learn a lot of those steps through trial and error, which can lead to a sense of frustration as you try to figure out the next steps for deploying your application the way that you want to.
Complexity isn’t the only problem that you’ll run into, though. Securing a Kubernetes installation can be a difficult prospect. A common challenge is that Kubernetes containers are configured with a shared token that allows Kubernetes to modify the container while it’s running. If an attacker were to gain access to this token, they’d have access to all the containers on your system. That’s a big problem.
Moreover, there are other common issues that Kubernetes users report. Setting up shared networking and storage resources, for instance, can be a struggle. Because containers aren’t designed to persist data, you need somewhere else for your containers to store it. Sharing that data effectively across a cluster of containers can be a big challenge.
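The usual pattern is to put that data behind a PersistentVolumeClaim, which asks the cluster for storage that outlives any individual container. A minimal example, with placeholder name and size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce        # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 10Gi
```

A pod then mounts the claim through the volumes and volumeMounts sections of its spec. The hard part the paragraph describes is what backs that claim: choosing a storage class and sharing storage safely across a cluster varies a lot by environment.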
Astute readers will notice that this isn’t necessarily a list of things Kubernetes isn’t good at. Instead, it’s a list of challenges that Kubernetes presents. An effective summary is that the biggest weakness of Kubernetes is that it’s just not very easy to use.
Many cloud service providers support systems similar to Kubernetes on their own infrastructure. AWS CloudFormation, for instance, manages clusters of servers on AWS, and it’s a lot easier to use than Kubernetes. So why use Kubernetes if there are simpler tools out there? One solid argument is that Kubernetes is vendor-agnostic.
Kubernetes doesn’t care if you’re deploying to Amazon or Google or Azure or onto servers located in your basement. It works just the same on every system. If you have some servers running on machines physically located inside your building, and others running on a cloud provider, Kubernetes can easily manage both.
If your organization isn’t committed to a single cloud provider or any cloud provider, Kubernetes is a great tool for managing your container clusters. Anytime you need vendor-agnostic simple deployments, self-healing environments, and auto-scaling, Kubernetes is a great tool for the job.
So we’ve come back around to understanding that the right way to think about these technologies isn’t “Kubernetes vs Docker” but rather “Kubernetes and Docker.” Kubernetes and Docker work together to orchestrate a software application. Docker’s containers serve as the individual instruments, each providing a single piece of the whole. Kubernetes is the conductor of the orchestra, making sure that all the pieces are in tune and playing in the right key.
When you’re working with Kubernetes, you’re able to automate a lot of the work of maintaining and deploying complicated applications. Mature software teams will build their Kubernetes work straight into their continuous integration pipelines, which means that new software features are automatically deployed and scaled almost as soon as they’re written. Obviously, this is terrific for customers of your software, but it’s also great for your teams. Kubernetes and Docker working together means that developers are seeing their work get out into the world a lot faster than they used to.
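What that pipeline step often boils down to is something like the following sketch, assuming the Deployment from earlier and a hypothetical $GIT_SHA variable provided by the CI system:

```bash
# Build and publish an image for the commit being deployed
docker build -t youruser/my-app:$GIT_SHA .
docker push youruser/my-app:$GIT_SHA

# Point the running Deployment at the new image and wait for the rollout to finish
kubectl set image deployment/my-app my-app=youruser/my-app:$GIT_SHA
kubectl rollout status deployment/my-app
```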
We’ve noted that one of the downsides of Kubernetes is that it’s difficult to set up and maintain because it’s so powerful. This remains true, but there’s good news on that horizon too.
Some intrepid companies are starting to build out Kubernetes as a Service offerings, which simplify the effort of getting Kubernetes running and stable. Much like Amazon EC2 simplifies the effort of getting a server up and running, KaaS simplifies the effort of getting dozens of servers up and running. This goes a long way toward eliminating the biggest weakness of Kubernetes.
So now that you know that it’s “Kubernetes and Docker,” you’re probably still wondering what some alternatives are. You don’t necessarily want to jump straight into the first container orchestration system you come across.
The good news is that container orchestration is a budding field, and there are a number of terrific alternatives to Kubernetes. Probably the most mature is Docker Swarm, which is built by the same organization that builds Docker itself. As a bonus, Docker Swarm is much simpler to configure than Kubernetes, eliminating one of the big negatives. Marathon, which is open source and runs on top of Apache Mesos, is another solid choice. There are also good options in Nomad and Kontena, two more entrants into the field.
There’s also the option to pursue Kubernetes as a Service like we previously mentioned.
Hopefully, this article has helped outline how Kubernetes and Docker are different, and how they work together. If you’re just getting into containerization, maybe the best next step is to start learning more about Docker and trying it out for yourself.
If you’re here to think about how to take containerization to the next level in your organization, reading more about Kubernetes and some of its competitors is a great next step. Maybe you’re finding Kubernetes is a difficult concept to master, and the right answer is for your company to move to KaaS.
Whatever your next step, now is an exciting time for technologists. It’s easier than ever to build and deploy novel applications that you can scale to millions of users with a few clicks. That was unthinkable 20 years ago! Today, it’s easy. As always, the best next step is just to get to building something.