24sama
After some cleanup, the scp steps seem to go through now, and the configs have mostly been pushed to the member nodes.
But kubelet still won't start. One interesting detail: listing the cached images shows the pause image at version 3.6, while my log still reports the pause tag as 3.5; I'm not sure whether that has any impact.
Getting a multi-master homelab into an HA configuration is genuinely hard. I'm almost tempted to just run kubeadm init by hand.
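For anyone who wants to reproduce the version check, this is roughly how I compared the pause tag containerd is configured with against what is actually cached (a sketch assuming containerd as the CRI runtime with the default socket path; substitute whatever image reference your config dump actually shows):

```bash
# Which sandbox (pause) image is containerd configured to use?
containerd config dump | grep sandbox_image

# Which pause images are actually present in the CRI image cache?
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep pause

# If the configured tag (3.5 in my log) is missing from the cache, pre-pull it
# (replace the reference below with the exact one from the config dump)
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull k8s.gcr.io/pause:3.5
```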
```
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:40:56 +08 stdout: [h170i]
/root/.bashrc: line 22: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 22: /usr/local/bin/kubectl: Permission denied
00:40:56 +08 command: [neopve]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:40:56 +08 stdout: [neopve]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:56 +08 command: [h170i]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/kubectl"
00:40:56 +08 stdout: [h170i]
/root/.bashrc: line 22: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 22: /usr/local/bin/kubectl: Permission denied
00:40:56 +08 command: [ryzenpve]
sudo -E /bin/bash -c "tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin"
00:40:56 +08 command: [neopve]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/kubectl"
00:40:56 +08 stdout: [neopve]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:57 +08 scp local file /root/.kube/kubekey/helm/v3.6.3/amd64/helm to remote /tmp/kubekey/usr/local/bin/helm success
00:40:57 +08 scp local file /root/.kube/kubekey/helm/v3.6.3/amd64/helm to remote /tmp/kubekey/usr/local/bin/helm success
00:40:57 +08 scp local file /root/.kube/kubekey/kube/v1.22.10/amd64/kubectl to remote /tmp/kubekey/usr/local/bin/kubectl success
00:40:58 +08 command: [h170i]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/usr/local/bin/helm /usr/local/bin/helm"
00:40:58 +08 command: [neopve]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/usr/local/bin/helm /usr/local/bin/helm"
00:40:58 +08 command: [h170i]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:40:58 +08 command: [qm77prx]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/usr/local/bin/kubectl /usr/local/bin/kubectl"
00:40:58 +08 command: [qm77prx]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:40:58 +08 stdout: [qm77prx]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:58 +08 command: [h170i]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/helm"
00:40:58 +08 command: [neopve]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:40:58 +08 command: [qm77prx]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/kubectl"
00:40:58 +08 stdout: [qm77prx]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:58 +08 command: [neopve]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/helm"
00:40:59 +08 scp local file /root/.kube/kubekey/cni/v0.9.1/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to remote /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz success
00:41:00 +08 scp local file /root/.kube/kubekey/cni/v0.9.1/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to remote /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz success
00:41:00 +08 scp local file /root/.kube/kubekey/helm/v3.6.3/amd64/helm to remote /tmp/kubekey/usr/local/bin/helm success
00:41:00 +08 command: [h170i]
sudo -E /bin/bash -c "tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin"
00:41:01 +08 command: [qm77prx]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/usr/local/bin/helm /usr/local/bin/helm"
00:41:01 +08 command: [neopve]
sudo -E /bin/bash -c "tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin"
00:41:01 +08 command: [qm77prx]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:41:01 +08 command: [qm77prx]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/helm"
00:41:03 +08 scp local file /root/.kube/kubekey/cni/v0.9.1/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to remote /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz success
00:41:04 +08 command: [qm77prx]
sudo -E /bin/bash -c "tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin"
00:41:04 +08 success: [pvesc]
00:41:04 +08 success: [ryzenpve]
00:41:04 +08 success: [h170i]
00:41:04 +08 success: [neopve]
00:41:04 +08 success: [qm77prx]
00:41:04 +08 [InstallKubeBinariesModule] Synchronize kubelet
00:41:04 +08 command: [pvesc]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/kubelet"
00:41:04 +08 command: [ryzenpve]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/kubelet"
00:41:04 +08 command: [h170i]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/kubelet"
00:41:04 +08 command: [neopve]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/kubelet"
00:41:04 +08 command: [qm77prx]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/kubelet"
00:41:04 +08 success: [pvesc]
00:41:04 +08 success: [ryzenpve]
00:41:04 +08 success: [h170i]
00:41:04 +08 success: [neopve]
00:41:04 +08 success: [qm77prx]
00:41:04 +08 [InstallKubeBinariesModule] Generate kubelet service
00:41:04 +08 scp local file /root/.kube/kubekey/pvesc/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:04 +08 scp local file /root/.kube/kubekey/ryzenpve/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:04 +08 scp local file /root/.kube/kubekey/h170i/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:04 +08 command: [pvesc]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service"
00:41:05 +08 scp local file /root/.kube/kubekey/neopve/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:05 +08 scp local file /root/.kube/kubekey/qm77prx/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:05 +08 command: [ryzenpve]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service"
00:41:05 +08 command: [pvesc]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:41:05 +08 command: [h170i]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service"
00:41:05 +08 command: [ryzenpve]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:41:05 +08 command: [h170i]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:41:05 +08 command: [neopve]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service"
00:41:05 +08 command: [qm77prx]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service"
00:41:05 +08 command: [neopve]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:41:05 +08 command: [qm77prx]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:41:05 +08 success: [pvesc]
00:41:05 +08 success: [ryzenpve]
00:41:05 +08 success: [h170i]
00:41:05 +08 success: [neopve]
00:41:05 +08 success: [qm77prx]
00:41:05 +08 [InstallKubeBinariesModule] Enable kubelet service
00:41:06 +08 command: [pvesc]
sudo -E /bin/bash -c "systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet"
00:41:06 +08 stdout: [pvesc]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [h170i]
sudo -E /bin/bash -c "systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet"
00:41:06 +08 stdout: [h170i]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [ryzenpve]
sudo -E /bin/bash -c "systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet"
00:41:06 +08 stdout: [ryzenpve]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [neopve]
sudo -E /bin/bash -c "systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet"
00:41:06 +08 stdout: [neopve]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [qm77prx]
sudo -E /bin/bash -c "systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet"
00:41:06 +08 stdout: [qm77prx]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 success: [pvesc]
00:41:06 +08 success: [h170i]
00:41:06 +08 success: [ryzenpve]
00:41:06 +08 success: [neopve]
00:41:06 +08 success: [qm77prx]
00:41:06 +08 [InstallKubeBinariesModule] Generate kubelet env
00:41:07 +08 scp local file /root/.kube/kubekey/pvesc/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 scp local file /root/.kube/kubekey/ryzenpve/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 scp local file /root/.kube/kubekey/h170i/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 scp local file /root/.kube/kubekey/neopve/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 command: [pvesc]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
00:41:07 +08 scp local file /root/.kube/kubekey/qm77prx/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 command: [ryzenpve]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
00:41:07 +08 command: [pvesc]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:41:07 +08 command: [h170i]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
00:41:07 +08 command: [ryzenpve]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:41:07 +08 command: [h170i]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:41:08 +08 command: [neopve]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
00:41:08 +08 command: [qm77prx]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
00:41:08 +08 command: [neopve]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:41:08 +08 command: [qm77prx]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:41:08 +08 success: [pvesc]
00:41:08 +08 success: [ryzenpve]
00:41:08 +08 success: [h170i]
00:41:08 +08 success: [neopve]
00:41:08 +08 success: [qm77prx]
00:41:08 +08 [InitKubernetesModule] Generate kubeadm config
00:41:08 +08 command: [pvesc]
sudo -E /bin/bash -c "containerd config dump | grep SystemdCgroup"
00:41:08 +08 stdout: [pvesc]
SystemdCgroup = true
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 command: [pvesc]
sudo -E /bin/bash -c "containerd config dump | grep SystemdCgroup"
00:41:08 +08 stdout: [pvesc]
SystemdCgroup = true
00:41:08 +08 Set kubeletConfiguration: %vmap[cgroupDriver:systemd clusterDNS:[169.254.25.10] clusterDomain:cluster.local containerLogMaxFiles:3 containerLogMaxSize:5Mi evictionHard:map[memory.available:5% pid.available:5%] evictionMaxPodGracePeriod:120 evictionPressureTransitionPeriod:30s evictionSoft:map[memory.available:10%] evictionSoftGracePeriod:map[memory.available:2m] featureGates:map[CSIStorageCapacity:true ExpandCSIVolumes:true RotateKubeletServerCertificate:true TTLAfterFinished:true] kubeReserved:map[cpu:200m memory:250Mi] maxPods:110 rotateCertificates:true systemReserved:map[cpu:200m memory:250Mi]]
00:41:09 +08 scp local file /root/.kube/kubekey/pvesc/kubeadm-config.yaml to remote /tmp/kubekey/etc/kubernetes/kubeadm-config.yaml success
00:41:09 +08 command: [pvesc]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/kubernetes/kubeadm-config.yaml /etc/kubernetes/kubeadm-config.yaml"
00:41:09 +08 command: [pvesc]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/*"
00:41:09 +08 skipped: [h170i]
00:41:09 +08 skipped: [ryzenpve]
00:41:09 +08 success: [pvesc]
00:41:09 +08 [InitKubernetesModule] Init cluster using kubeadm
00:45:11 +08 command: [pvesc]
sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
00:45:11 +08 stdout: [pvesc]
W0627 00:41:09.878828 34506 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [192.168.50.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.22.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [h170i h170i.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost neopve neopve.cluster.local pvesc pvesc.cluster.local qm77prx qm77prx.cluster.local ryzenpve ryzenpve.cluster.local sdb2640m sdb2640m.cluster.local] and IPs [192.168.50.1 192.168.50.6 127.0.0.1 192.168.50.10 192.168.50.20 192.168.50.23 192.168.50.40 192.168.50.253]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
00:45:11 +08 stderr: [pvesc]
Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0627 00:41:09.878828 34506 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [192.168.50.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.22.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [h170i h170i.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost neopve neopve.cluster.local pvesc pvesc.cluster.local qm77prx qm77prx.cluster.local ryzenpve ryzenpve.cluster.local sdb2640m sdb2640m.cluster.local] and IPs [192.168.50.1 192.168.50.6 127.0.0.1 192.168.50.10 192.168.50.20 192.168.50.23 192.168.50.40 192.168.50.253]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
```
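Next step on my side is to follow the hints at the end of the log before touching kubeadm again. For completeness, these are the checks I'm running (assuming systemd and containerd on the control-plane node, as the log shows):

```bash
# Did kubelet start at all, and why did it exit?
systemctl status kubelet
journalctl -xeu kubelet --no-pager | tail -n 50

# Are any control-plane containers crash-looping under containerd?
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause
```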