
How to Restart a Kubernetes Pod Using kubectl

By: Rajesh Tilwani  |  May 15, 2024

Restarting a Kubernetes pod can be necessary to troubleshoot issues, apply configuration changes or simply ensure the pod starts fresh with a clean state. With the power of kubectl, you’ll be able to gracefully restart pods without disrupting the overall application availability.

This post will walk you through the process of restarting pods within a Kubernetes cluster using the command-line tool, kubectl.

Let’s dive in and learn how to effectively restart Kubernetes pods using kubectl! But first …

What is a Pod in Kubernetes?

In Kubernetes, a pod is the smallest and most basic deployment unit. A pod represents a single instance of a running process within a cluster. A K8s pod encapsulates one or more containers, storage resources and network settings that are tightly coupled and need to be scheduled and managed together.

Think of it like a crate holding one or more beer bottles, with Kubernetes as the bartender.

Containers within a pod share many things like the same network namespace, allowing them to communicate with each other over localhost. They can also share the same storage volumes, making sharing data and files between containers in the same pod easier.
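As a minimal sketch of that sharing, here is a hypothetical pod in which two containers mount the same emptyDir volume; all names and images are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data

Both containers see /data, and because they share the pod’s network namespace, the reader could also reach anything the writer serves on localhost.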

Like many things, K8s pods also have a lifecycle. Let’s take a look.

Five Stages in the Lifecycle of a Kubernetes Pod

  1. Pending: In this stage, the pod has been created but is waiting for the necessary resources (such as CPU, memory, or storage) to be allocated to it. The pod remains in the pending state until all required resources are available on the cluster.
  2. Running: Once the necessary resources are allocated, the pod enters the running state. At this stage, the containers within the pod are started, and the pod is actively running on a node within the cluster. The containers are executing the specified application or workload.
  3. Succeeded: If the containers within a pod successfully complete their tasks and terminate without any errors, the pod transitions to the succeeded state. This is typically applicable to batch jobs or one-time tasks that have completed successfully and don’t require continuous operation.
  4. Failed: If any container within a pod terminates with an error, the pod moves to the failed state. This could occur due to an application error, resource limitation or other issues. When a pod fails, it will not automatically restart, and it requires manual intervention to diagnose and fix the underlying problem.
  5. Unknown: The state of the pod could not be determined. This usually results from a communication error with the node where the pod should be running.

Note: Some kubectl commands display a pod as Terminating while it is being destroyed. Terminating is not one of the pod phases; it simply means the pod has been given a grace period to shut down, which is 30 seconds by default.
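If you want to see this yourself, the commands below (using the hypothetical pod name my-app-pod-123) print a pod’s current phase and override the default grace period when deleting it:

kubectl get pod my-app-pod-123 -o jsonpath='{.status.phase}'
kubectl delete pod my-app-pod-123 --grace-period=60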


Why Restart a Pod in Kubernetes?

A pod should continue to function until it is replaced by a new deployment. As a result, a pod cannot be restarted; instead, it must be replaced.

There are a few alternative ways to accomplish a pod “restart” with kubectl. There is no kubectl restart [podname] command in Kubernetes, but if your nodes use the Docker runtime, you can restart an individual container directly on the node:

docker restart [container_id]

We’ll discuss the kubectl-based methods below; if you are in a hurry, scroll down to the commands section.

There are several reasons why you may need to restart Kubernetes pods:

Configuration Changes

When you make changes to the configuration of your application or the environment it runs in, restarting pods can be necessary to apply those changes. This includes updates to environment variables, resource limits, volume mounts or any other configuration parameter that requires a pod restart to take effect.

Application Updates

When you deploy a new version of your application, restarting pods is often required to ensure that the updated code or container image is running.
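For example, updating a Deployment’s container image with kubectl set image triggers a rolling replacement of its pods; the container name and image tag here are illustrative:

kubectl set image deployment/my-app my-container=my-image:v2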

Troubleshooting

If your application is experiencing issues or behaving unexpectedly, restarting pods can be a troubleshooting step to resolve the problem.

Resource Constraints

In some cases, pods may need to be restarted to address resource constraints. For example, if a pod is running out of memory or experiencing high CPU usage, restarting it can help reclaim the resources and restore normal operation.
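To spot such pods before restarting them, kubectl top reports current usage (this requires the metrics-server add-on to be installed in the cluster):

kubectl top pod -n <namespace>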

Networking or Service Discovery Changes

When making changes to networking configurations or service discovery mechanisms, restarting pods can be necessary to ensure that the pods are properly connected to the updated network.

State Cleanup

In situations where a pod’s internal state becomes corrupted or inconsistent, restarting the pod can help reset the state and start fresh.

Performance Optimization

Restarting pods periodically can be part of a performance optimization strategy.

What Are Namespaces in Kubernetes, and How to Use kubectl to Manage Them

In layman’s terms, Namespaces in Kubernetes can be described as the “Harry Potter Sorting Hats” of the cluster world. Just like Hogwarts houses students based on their traits, Kubernetes namespaces sort and separate workloads based on their teams or projects.

It’s like having magical boundaries around your resources, making sure they play nicely with others, and preventing any sneaky spells from causing chaos in the cluster.

In technical terms, Kubernetes uses namespaces as a way to create virtual clusters within a physical Kubernetes cluster. Namespaces provide a way to divide resources and isolate them from each other, enabling multiple teams or projects to run their workloads independently and securely within the same cluster.

Each namespace acts as a logical boundary, organizing and separating resources such as pods, services, deployments and more.

Here’s an example to illustrate the usage of namespaces in Kubernetes.

Team A and Team B

Suppose you have a Kubernetes cluster with multiple teams, Team A and Team B, who need to deploy their applications. Without namespaces, there would be a risk of naming conflicts (conflicts are risky in prod) and resource collisions between the two teams. However, by leveraging namespaces, you can create separate environments for each team within the same cluster.

  1. Creating Namespaces: First, you would create two namespaces, one for Team A and another for Team B. For example:
kubectl create namespace team-a
kubectl create namespace team-b
  2. Deploying Resources: Next, each team can deploy their resources within their respective namespaces. For example, Team A can deploy their application in the “team-a” namespace, and Team B can do the same in the “team-b” namespace. This ensures that the resources are isolated and only accessible within their designated namespaces. To deploy a resource within a specific namespace, you can use the --namespace flag with kubectl. For example:
kubectl create deployment my-app --image=my-image --namespace=team-a
  3. Accessing Resources: To access the resources within a namespace, you need to specify the namespace when interacting with the cluster. For example, to view the pods in Team A’s namespace:
kubectl get pods --namespace=team-a
  4. Namespace Scopes: By default, when a resource is created without specifying a namespace, it is placed in the “default” namespace. However, you can set a default namespace for a particular context using the kubectl config command, which lets you omit the --namespace flag when working within that context (see the sketch after this list).
  5. Resource Sharing: While namespaces provide isolation, they also allow for controlled resource sharing between namespaces. You can define resource quotas and limits at the namespace level to ensure fair resource allocation and prevent a single team from monopolizing cluster resources (an example quota follows the list).
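As a minimal sketch of point 4, this command sets the default namespace for the current kubectl context, using the team-a namespace from the example above:

kubectl config set-context --current --namespace=team-a
kubectl get pods

After the first command, kubectl get pods lists the pods in team-a without needing the --namespace flag.

And for point 5, a namespace-level ResourceQuota might look like the following; the limits are purely illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi

Applying it with kubectl apply -f team-a-quota.yaml caps the total number of pods, CPU requests and memory requests allowed in team-a.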

By utilizing namespaces effectively, Kubernetes enables teams to work concurrently and independently within a shared cluster environment, maintaining separation and ensuring resource efficiency.

Now let’s review kubectl commands for restarting Kubernetes pods.

5 Methods of Restarting Pods Using kubectl

Now that we understand why we need to restart Kubernetes pods, let’s explore all the ways in which we can.

There are a few methods to restart Kubernetes pods using the kubectl command-line tool:

Rolling Restart Using Deployments or StatefulSets

If your pods are managed by a Deployment or StatefulSet, you can perform a rolling restart to gracefully restart the pods one by one without impacting the availability of your application. To trigger a rolling restart, you can use the kubectl rollout restart command followed by the name of the Deployment or StatefulSet. For example:

kubectl rollout restart deployment/my-app

This command initiates the rolling restart process, creating new pods with the updated configuration or image, and terminating the old pods gradually. This method can be used as of K8S v1.15.
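To confirm that the restart has finished rolling out, kubectl rollout status works alongside it; my-app is the example Deployment used throughout this post, and the StatefulSet name is illustrative:

kubectl rollout status deployment/my-app
kubectl rollout restart statefulset/my-statefulset

The second command shows that the same pattern applies to a StatefulSet.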

Delete and Recreate

If your pods are managed by a controller such as a ReplicaSet (even without a Deployment or StatefulSet), you can delete the existing pods and let Kubernetes recreate them with the same configuration. Note that a bare pod with no controller behind it will not come back on its own once deleted.

First, you can list the pods to identify the ones you want to restart:

kubectl get pods

Then, delete the pods using the kubectl delete pod command followed by the pod name. For example:

kubectl delete pod my-app-pod-123

As long as a controller manages the pods, Kubernetes will automatically create new pods to replace the deleted ones, keeping your application running. If the ReplicaSet itself is owned by a Deployment, you can also delete the whole ReplicaSet and let the Deployment recreate it along with fresh pods:

kubectl delete replicaset <name> -n <namespace>
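When you want to restart several pods at once, deleting by label selector is often easier than deleting them one by one; app=my-app is an assumed label here:

kubectl delete pods -l app=my-app -n <namespace>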

Scale Down and Scale Up

Another method is to scale down the number of replicas for your pod deployment to zero and then scale it back up. This effectively terminates the existing pods and creates new ones.

To scale down the number of replicas, use the kubectl scale command with the desired number of replicas set to zero. For example:

kubectl scale deployment/my-app --replicas=0

After scaling down, you can scale up the deployment again to the desired number of replicas, triggering the creation of new pods:

kubectl scale deployment/my-app --replicas=3

You can add the -n <namespace> flag to either of the above commands to target a specific namespace.
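For example, assuming the team-a namespace used earlier in this post, a scale-down-and-up restart looks like this:

kubectl scale deployment/my-app --replicas=0 -n team-a
kubectl scale deployment/my-app --replicas=3 -n team-a
kubectl get pods -n team-a --watch

The last command simply watches the replacement pods come up.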

Scaling the Kubernetes cluster itself (adding or removing nodes) is a separate concern, handled by solutions such as AWS Karpenter and the Kubernetes Cluster Autoscaler.

Environment Variable Change

If an environment variable attached to a Deployment is set or changed, Kubernetes rolls out new pods to apply the modification. The example below restarts the pods of the my-app Deployment by setting the environment variable DEPLOY_DATE to the current date:

kubectl set env deployment/my-app -n <namespace> DEPLOY_DATE="$(date)"
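To verify the change, you can list the environment variables currently set on the Deployment with the standard --list flag:

kubectl set env deployment/my-app -n <namespace> --list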

Using Restart Policy

Using a restart policy, Kubernetes can be configured to automatically restart the containers in a pod when they fail. The Always restart policy, which restarts a pod’s containers whenever they exit, is the default and most common setting.

How to set a restart policy is as follows:

  1. Edit the pod definition YAML file and add a restartPolicy field with the value Always.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  restartPolicy: Always
  containers:
  - name: my-container
    image: my-image
  2. Apply the updated pod definition using the kubectl apply command.
kubectl apply -f my-pod.yaml
pod/my-pod created

Whenever a container in the pod crashes or exits, Kubernetes will automatically restart it in place. Note that the restart policy applies to containers within the pod; it does not recreate a pod that has been deleted outright.
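To check whether the policy has kicked in, you can read the container restart count from the pod’s status; my-pod is the example pod defined above:

kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].restartCount}'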

This wraps up all the common methods to restart K8s pods.

Final thoughts

Remember that when you restart pods, there might be a temporary disruption in your application’s availability. It’s important to consider any dependencies or impacts on your workload before initiating a restart.

It’s worth noting that Kubernetes provides mechanisms like rolling updates and lifecycle hooks to minimize the need for manual pod restarts in certain scenarios. However, there are cases where a manual restart is still necessary to ensure the desired state or troubleshoot problems effectively.

If you need to restart a single pod, you can use the kubectl delete command, or scale the deployment down and back up again using the kubectl scale command.

Happy kubectl-ing!
