How to Restart a Pod in Kubernetes

A pod can contain one or more containers. Besides the main application container, a pod may include init containers, which run to completion before the application container starts, and sidecar containers, which run alongside the primary application container. A container or pod does not always exit because the application failed; in scenarios like this, you may need to restart your Kubernetes pod explicitly. In this guide, you will explore several ways to force the pods in a deployment to restart.

Prerequisites

To restart pods using kubectl, make sure you have installed the kubectl tool and set up a minikube cluster. Otherwise, you will not be able to follow along with this article.

Methods to create pods using kubectl

To restart pods using kubectl, you first have to start the minikube cluster by running minikube start in the terminal. Once the cluster is up, you can create a standalone pod like this:

kubectl run nginx --image nginx --port=80
# You can also generate the pod's YAML manifest; the following command prints it without creating the pod
kubectl run nginx --image nginx --port=80 --dry-run=client -o yaml
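If you prefer to create the pod from the generated manifest instead, you can redirect the dry-run output to a file and apply it. A minimal sketch, assuming a running cluster (the file name pod.yaml is an arbitrary choice):

```shell
# Generate the manifest without creating the pod, save it, then apply it
kubectl run nginx --image nginx --port=80 --dry-run=client -o yaml > pod.yaml
kubectl apply -f pod.yaml
```

This is handy when you want to version-control the manifest or tweak it before creating the pod.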

You can also use a Deployment to deploy pods, like this:

kubectl create deployment myweb --image=nginx --replicas=1 --port=80
# You can also deploy from a YAML manifest; the following command prints the manifest without creating anything:
[root@ecs-82f5 ~]# kubectl create deployment myweb --image=nginx --replicas=1 --port=80 --dry-run=client -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myweb
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}
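The manifest above can likewise be saved to a file and created with kubectl apply. A sketch, assuming a running cluster (the file name deploy.yaml is arbitrary):

```shell
# Save the generated Deployment manifest, create it from the file, then verify
kubectl create deployment myweb --image=nginx --replicas=1 --port=80 \
  --dry-run=client -o yaml > deploy.yaml
kubectl apply -f deploy.yaml
kubectl get deployment myweb
```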

Method 1: Rolling restart

A rolling restart restarts each pod in the deployment in turn. This is the recommended strategy because it does not cause a service interruption. Run the following command in the terminal:

kubectl rollout restart deployment <deployment name>

[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myweb-55565b5c87-pcb5c   1/1     Running   0          26s
[usera@ecs-82f5 ~]$ kubectl rollout restart deployment myweb
deployment.apps/myweb restarted
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
myweb-55565b5c87-pcb5c   1/1     Running             0          3m5s
myweb-c849b688b-grvbn    0/1     ContainerCreating   0          2s
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
myweb-c849b688b-grvbn   1/1     Running   0          8s

The command above restarts the deployment's pods one at a time. Your app stays accessible because most of the containers keep running during the rollout. Note that the pod name changes after the restart.
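If you script this in automation, you can block until the rolling restart has finished. A sketch using the myweb deployment from above (the 120s timeout is an arbitrary choice):

```shell
# Trigger the rolling restart, then wait until the new pods are ready
kubectl rollout restart deployment myweb
kubectl rollout status deployment myweb --timeout=120s
```

kubectl rollout status exits non-zero if the rollout does not complete within the timeout, which makes it useful in CI pipelines.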

Method 2: Updating an environment variable

The second method is to force the pods to restart and pick up your changes by setting or changing an environment variable on the deployment:

kubectl set env deployment <deployment name> DEPLOY_DATE="$(date)"

[usera@ecs-82f5 ~]$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
myweb-c849b688b-grvbn   1/1     Running   0          3h10m
[usera@ecs-82f5 ~]$ kubectl set env deployment myweb DEPLOY_DATE="$(date)"
deployment.apps/myweb env updated
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
myweb-67cd6f8c99-jrrq9   0/1     ContainerCreating   0          3s
myweb-c849b688b-grvbn    1/1     Running             0          3h10m
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myweb-67cd6f8c99-jrrq9   1/1     Running   0          6s
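To confirm the variable was applied (a restart only happens when the value actually changes, which is why the timestamp from $(date) works well), you can list the deployment's environment variables:

```shell
# List environment variables currently set on the deployment's containers
kubectl set env deployment myweb --list
```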

Method 3: Scaling the deployment down and back up

Another way to restart pods is to scale the deployment down to zero replicas and then back up to the desired count. This forces all current pods to stop and terminate, after which fresh pods are scheduled in their place. Note that scaling to 0 causes an outage, which is why a rolling restart is usually preferred. Use the following commands to set a deployment's replicas to 0 and back:

kubectl scale deployment <deployment name> --replicas=0
kubectl scale deployment <deployment name> --replicas=1

The scale command specifies how many replicas of the pod should be active. Setting it to zero effectively shuts the pod down. To start the pods again, set the replica count back to a value greater than 0.

[usera@ecs-82f5 ~]$ kubectl scale deployment myweb --replicas=0
deployment.apps/myweb scaled
[usera@ecs-82f5 ~]$ kubectl get pods
No resources found in default namespace.
[usera@ecs-82f5 ~]$ kubectl scale deployment myweb --replicas=1
deployment.apps/myweb scaled
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
myweb-67cd6f8c99-zbczp   0/1     ContainerCreating   0          2s
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myweb-67cd6f8c99-zbczp   1/1     Running   0          4s
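The two scale steps can be combined into a small script that waits for the old pods to disappear before scaling back up. A sketch, assuming the label app=myweb (the label kubectl create deployment sets on the pods, as seen in the manifest above) and arbitrary timeouts:

```shell
# Scale down, wait for the old pods to terminate, then scale back up
kubectl scale deployment myweb --replicas=0
kubectl wait --for=delete pod -l app=myweb --timeout=60s
kubectl scale deployment myweb --replicas=1
kubectl rollout status deployment myweb --timeout=120s
```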

Conclusion

Kubernetes is an effective container orchestration platform. However, as with any system, problems do arise. Restarting a pod will not resolve the underlying issue that caused it to fail in the first place, so be sure to identify and fix the root cause as well.
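Two kubectl commands are a good starting point for finding that root cause. A sketch using the pod name from the transcript above (substitute whatever kubectl get pods shows in your cluster):

```shell
# Inspect recent events and status conditions for the pod
kubectl describe pod myweb-67cd6f8c99-zbczp
# Read the logs of the previous container instance if it crashed and restarted
kubectl logs myweb-67cd6f8c99-zbczp --previous
```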
