
How to delete an orphaned pod in Kubernetes

Problem description

The office building was powered off last Friday. When I came back on Monday, I found many pods stuck in the Terminating state. After investigation, it turned out that some nodes in the testing-environment Kubernetes cluster were PCs, which had to be turned on manually after the power outage. After power-on the nodes returned to normal, but the pods remained stuck in Terminating.

The kubelet kept logging the following error, as seen with journalctl -fu kubelet:

[root@k8s-node4 pods]# journalctl -fu kubelet
-- Logs begin at Tue 2019-05-21 08:52:08 CST. --
May 21 14:48:48 k8s-node4 kubelet[2493]: E0521 14:48:48.748460    2493 kubelet_volumes.go:140] Orphaned pod "d29f26dc-77bb-11e9-971b-0050568417a2" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.

Enter the directory /var/lib/kubelet/pods and list its contents:

[root@k8s-node4 pods]# ls
36e224e2-7b73-11e9-99bc-0050568417a2  42e8cd65-76b1-11e9-971b-0050568417a2  42eaca2d-76b1-11e9-971b-0050568417a2
36e30462-7b73-11e9-99bc-0050568417a2  42e94e29-76b1-11e9-971b-0050568417a2  d29f26dc-77bb-11e9-971b-0050568417a2

As you can see, each pod's ID appears here as a folder. cd into the one from the error log (d29f26dc-77bb-11e9-971b-0050568417a2) and you can see the following files inside:

[root@k8s-node4 d29f26dc-77bb-11e9-971b-0050568417a2]# ls
containers  etc-hosts  plugins  volumes

There is an entry sagent-b4dd8b5b9-zq649 in the etc-hosts file:

[root@k8s-node4 d29f26dc-77bb-11e9-971b-0050568417a2]# cat etc-hosts
# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
        sagent-b4dd8b5b9-zq649
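This etc-hosts trick generalizes: the last line of each Kubernetes-managed hosts file names the pod, so you can map every pod directory on the node to its pod name in one pass. A minimal sketch (the `pod_names` helper is hypothetical, and the awk field access assumes the standard hosts-file layout shown above):

```shell
# For each pod directory under the kubelet data dir, print "<uid> <pod-name>"
# by taking the last field of the last line of its etc-hosts file.
pod_names() {
    for hosts in "${PODS_DIR:-/var/lib/kubelet/pods}"/*/etc-hosts; do
        uid="$(basename "$(dirname "$hosts")")"          # directory name is the pod UID
        name="$(awk 'END { print $NF }' "$hosts")"       # hostname column of last entry
        printf '%s %s\n' "$uid" "$name"
    done
}
```

Running `pod_names` on the node then gives you pod names you can grep for on the master.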

Running kubectl get pod | grep sagent-b4dd8b5b9-zq649 on the master confirms that the pod no longer exists.
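If several nodes were powered off, checking each UID by hand gets tedious. The check can be sketched as a comparison of the node's on-disk pod UIDs against the live pod UIDs from the API server (a sketch assuming kubectl access; the `live_uids` and `find_orphans` helpers are hypothetical names):

```shell
PODS_DIR=/var/lib/kubelet/pods

# UIDs of all pods the API server still knows about, one per line.
live_uids() {
    kubectl get pods --all-namespaces -o custom-columns=UID:.metadata.uid --no-headers
}

# Read live UIDs from stdin and print every directory name under $PODS_DIR
# that is not among them, i.e. the orphan candidates.
find_orphans() {
    live="$(cat)"
    for dir in "$PODS_DIR"/*; do
        uid="$(basename "$dir")"
        if ! printf '%s\n' "$live" | grep -qx "$uid"; then
            printf '%s\n' "$uid"
        fi
    done
}

# Usage on the node: live_uids | find_orphans
```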


The current solution: on the problem node, enter the `/var/lib/kubelet/pods` directory, delete the folder matching the pod ID reported in the error log, and then restart kubelet and docker.
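The steps above can be sketched as follows, using the UID from this article's error log (substitute the UID from your own kubelet log; the `orphan_dir` helper and the `APPLY_FIX` guard are illustrative additions, not part of any standard tooling):

```shell
# Build the on-disk path for a pod UID under the default kubelet data dir.
orphan_dir() {
    printf '/var/lib/kubelet/pods/%s' "$1"
}

# UID reported by the kubelet "Orphaned pod ... found" error.
POD_UID=d29f26dc-77bb-11e9-971b-0050568417a2

# Destructive step, guarded so nothing happens unless explicitly requested.
# Only set APPLY_FIX=yes after confirming on the master that the pod is gone.
if [ "${APPLY_FIX:-no}" = yes ]; then
    rm -rf "$(orphan_dir "$POD_UID")"
    systemctl restart kubelet
    systemctl restart docker
fi
```

After the restart, the "Orphaned pod ... found" messages should stop appearing in journalctl -fu kubelet.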