
Steps to upgrade from 1.28.4 to 1.29.0


Prerequisites

Check the repository status of kubelet, kubeadm, and kubectl

First, let's check that the apt repository configured for the current packages is correct:

shell
sudo apt info kubelet
shell
sudo apt info kubeadm
shell
sudo apt info kubectl

Generally, for a 1.29 cluster the apt repository should be

https://pkgs.k8s.io/core:/stable:/v1.29/deb
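If the repository is still pinned to v1.28, switch it to v1.29 before upgrading. A minimal sketch, assuming the repository definition lives in /etc/apt/sources.list.d/kubernetes.list as created by the official install instructions (adjust the path if yours differs):

shell
# Assumption: the repo file path matches the official pkgs.k8s.io instructions.
sudo sed -i 's#core:/stable:/v1.28#core:/stable:/v1.29#' /etc/apt/sources.list.d/kubernetes.list
sudo apt update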

Upgrade kubeadm on the first control-plane node

Please first follow the steps in "Upgrading kubeadm to 1.29.0 via apt" to complete this part.
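For reference, the apt side of that step generally looks like the sketch below; the version pin and the hold/unhold steps follow the official Kubernetes upgrade docs (skip the hold steps if you don't pin your packages):

shell
sudo apt-mark unhold kubeadm
sudo apt update
# The '1.29.0-*' pin matches the distro revision suffix of the package
sudo apt install -y kubeadm='1.29.0-*'
sudo apt-mark hold kubeadm
# Confirm the binary was actually upgraded
kubeadm version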

Prepare the kubeadm upgrade plan

shell
sudo kubeadm upgrade plan

It produces output like this:

shell
$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.28.4
[upgrade/versions] kubeadm version: v1.29.0
[upgrade/versions] Target version: v1.29.0
[upgrade/versions] Latest version in the v1.28 series: v1.28.5

W1221 21:30:35.414895   12861 configset.go:78] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" not found
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.28.4   v1.28.5

Upgrade to the latest version in the v1.28 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.28.4   v1.28.5
kube-controller-manager   v1.28.4   v1.28.5
kube-scheduler            v1.28.4   v1.28.5
kube-proxy                v1.28.4   v1.28.5
CoreDNS                   v1.10.1   v1.11.1
etcd                      3.5.9-0   3.5.10-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.28.5

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.28.4   v1.29.0

Upgrade to the latest stable version:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.28.4   v1.29.0
kube-controller-manager   v1.28.4   v1.29.0
kube-scheduler            v1.28.4   v1.29.0
kube-proxy                v1.28.4   v1.29.0
CoreDNS                   v1.10.1   v1.11.1
etcd                      3.5.9-0   3.5.10-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.29.0

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   -                 v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

The kubeproxy.config warning above is expected here: this cluster runs without kube-proxy, and kubeadm itself notes that it assumes kube-proxy was not deployed. Once the plan looks good, we can upgrade.

Apply the kubeadm upgrade

shell
sudo kubeadm upgrade apply v1.29.0

The output looks like this:

shell
$ sudo kubeadm upgrade apply v1.29.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1221 21:30:57.403724   12875 configset.go:78] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" not found
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.29.0"
[upgrade/versions] Cluster version: v1.28.4
[upgrade/versions] kubeadm version: v1.29.0
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.29.0" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-21-21-35-14/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2962562190"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-21-21-35-14/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-21-21-35-14/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-21-21-35-14/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2416135083/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
W1221 21:38:14.892463   12875 postupgrade.go:188] the ConfigMap "kube-proxy" in the namespace "kube-system" was not found. Assuming that kube-proxy was not deployed for this cluster. Note that once 'kubeadm upgrade apply' supports phases you will have to skip the kube-proxy upgrade manually

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.29.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
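Before moving on, it's worth a quick sanity check that the control plane actually reports the new version; a small sketch:

shell
# The Server Version should now be v1.29.0
sudo kubectl version
# The control-plane static Pods should have been re-created with new image tags
sudo kubectl -n kube-system get pods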

Upgrade kubeadm on the other control-plane nodes

Unlike on the first control-plane node, we upgrade the remaining control-plane nodes with the following command:

shell
sudo kubeadm upgrade node
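Run this on each of the remaining control-plane nodes, after upgrading the kubeadm package there as before. If you prefer to preview the changes first, recent kubeadm releases also support a dry run; a sketch:

shell
# Preview what the node upgrade would do without applying it
sudo kubeadm upgrade node --dry-run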

Upgrade kubelet and kubectl on all control-plane nodes, one node at a time

Drain the node

shell
sudo kubectl drain <node-to-drain> --ignore-daemonsets

In my case, running

shell
sudo kubectl drain node1 --ignore-daemonsets

is all it takes.
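If the drain hangs on Pods that use emptyDir volumes or aren't managed by a controller, kubectl has flags for forcing it along; use them with care, since they discard Pod-local data:

shell
# --delete-emptydir-data evicts Pods with emptyDir volumes (their data is lost);
# --force also evicts Pods not managed by a controller
sudo kubectl drain node1 --ignore-daemonsets --delete-emptydir-data --force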

Please first follow the steps in "Upgrading kubelet and kubectl to 1.29.0 via apt" to complete this part.
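For reference, that step through apt generally looks like the following sketch (again per the official upgrade docs; the kubelet restart is required for the new binary to take effect):

shell
sudo apt-mark unhold kubelet kubectl
sudo apt update
sudo apt install -y kubelet='1.29.0-*' kubectl='1.29.0-*'
sudo apt-mark hold kubelet kubectl
# Restart kubelet so the upgraded binary is picked up
sudo systemctl daemon-reload
sudo systemctl restart kubelet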

Uncordon the node

shell
sudo kubectl uncordon node1

Check the node status to see whether the versions line up:

shell
$ sudo kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   71d   v1.29.0
node2   Ready    <none>          71d   v1.28.4
node3   Ready    <none>          71d   v1.28.4

Upgrade the worker nodes

Upgrade kubeadm

Please first follow the steps in "Upgrading kubeadm to 1.29.0 via apt" to complete this part (the same apt commands sketched earlier apply here), then run:

shell
sudo kubeadm upgrade node

Drain the node

shell
sudo kubectl drain node2 --ignore-daemonsets

Upgrade kubelet and kubectl

Please first follow the steps in "Upgrading kubelet and kubectl to 1.29.0 via apt" to complete this part (again, the apt sketch above applies; remember to restart kubelet afterwards).

Uncordon the node

shell
sudo kubectl uncordon node2

Check the node status to see whether the versions line up:

shell
$ sudo kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   71d   v1.29.0
node2   Ready    <none>          71d   v1.29.0
node3   Ready    <none>          71d   v1.29.0

When every node shows Ready with v1.29.0, the upgrade is complete.
