1 Overview
These are my notes on building a Kubernetes v1.21.3 cluster. Three CentOS 7.9 virtual machines serve as the test hosts; kubeadm, kubelet, and kubectl are all installed via yum, and flannel is used as the network add-on.
2 Environment Preparation
Unless stated otherwise, all deployment commands are run as the root user.
2.1 Hardware
IP | hostname | mem | disk | role |
---|---|---|---|---|
192.168.4.120 | centos79-node1 | 4GB | 30GB | k8s control-plane node |
192.168.4.121 | centos79-node2 | 4GB | 30GB | k8s worker node 1 |
192.168.4.123 | centos79-node3 | 4GB | 30GB | k8s worker node 2 |
2.2 Software
software | version |
---|---|
CentOS | CentOS Linux release 7.9.2009 (Core) |
Kubernetes | 1.21.3 |
Docker | 20.10.8 |
Kernel | 5.4.138-1.el7.elrepo.x86_64 |
2.3 Verify the Environment
purpose | commands |
---|---|
Ensure all cluster nodes can reach each other | ping -c 3 <ip> |
Ensure MAC addresses are unique | ip link or ifconfig -a |
Ensure hostnames are unique within the cluster | check with hostnamectl status, change with hostnamectl set-hostname <hostname> |
Ensure the system product UUID is unique | dmidecode -s system-uuid or sudo cat /sys/class/dmi/id/product_uuid |
Reference commands for changing a MAC address:
ifconfig eth0 down
ifconfig eth0 hw ether 00:0c:29:84:fd:a4
ifconfig eth0 up
If the product_uuid values are not unique, consider reinstalling CentOS.
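To spot duplicates quickly, the checks above can be gathered in one pass. A minimal sketch, assuming the SSH trust configured later in section 2.5 is already in place (otherwise run the commands on each node by hand):
# Collect hostname, MAC addresses, and product UUID from every node
for n in 192.168.4.120 192.168.4.121 192.168.4.123; do
  echo "=== $n ==="
  ssh root@$n 'hostname; cat /sys/class/net/*/address; dmidecode -s system-uuid'
done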
2.4 Ensure the Required Ports Are Open
Port checklist for centos79-node1:
Protocol | Direction | Port Range | Purpose |
---|---|---|---|
TCP | Inbound | 6443 | kube-apiserver |
TCP | Inbound | 2379-2380 | etcd API |
TCP | Inbound | 10250 | kubelet API |
TCP | Inbound | 10251 | kube-scheduler |
TCP | Inbound | 10252 | kube-controller-manager |
Port checklist for centos79-node2 and centos79-node3:
Protocol | Direction | Port Range | Purpose |
---|---|---|---|
TCP | Inbound | 10250 | kubelet API |
TCP | Inbound | 30000-32767 | NodePort Services |
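Once the corresponding services are running (after section 4), reachability of these ports can be spot-checked from any node. A minimal sketch using bash's built-in /dev/tcp; check_port is a hypothetical helper, not part of the deployment itself:
# Report open/closed for a host:port pair
check_port() { timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null && echo "$1:$2 open" || echo "$1:$2 closed"; }
# Control-plane ports on centos79-node1
for p in 6443 2379 2380 10250 10251 10252; do check_port 192.168.4.120 $p; done
# Kubelet port on the worker nodes
for h in 192.168.4.121 192.168.4.123; do check_port $h 10250; done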
2.5 Configure Mutual Trust Between Hosts
Configure /etc/hosts name resolution:
cat >> /etc/hosts <<EOF
192.168.4.120 centos79-node1
192.168.4.121 centos79-node2
192.168.4.123 centos79-node3
EOF
On centos79-node1, generate an SSH key pair and distribute it to each node:
# Generate the SSH key pair; just press Enter through all prompts
ssh-keygen -t rsa
# Copy the newly generated public key to each node's trusted list; you will be prompted for each host's password
ssh-copy-id root@centos79-node1
ssh-copy-id root@centos79-node2
ssh-copy-id root@centos79-node3
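With key-based login working, the hosts entries can also be pushed from centos79-node1 to the other nodes in one go; a small convenience sketch:
for n in centos79-node2 centos79-node3; do
  scp /etc/hosts root@$n:/etc/hosts
done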
2.6 Disable Swap
Swap only kicks in when memory runs out, using disk blocks as extra memory, and disk I/O is vastly slower than RAM, so disable swap to improve performance. Run on every node:
swapoff -a
cp /etc/fstab /etc/fstab.bak
cat /etc/fstab.bak | grep -v swap > /etc/fstab
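To confirm swap is fully off on a node (expect no active swap devices and a Swap line of all zeros):
swapon -s                # prints nothing when no swap is active
free -h | grep -i swap   # Swap: 0B 0B 0B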
2.7 Disable SELinux
Disable SELinux, otherwise kubelet may report Permission denied when mounting directories. It can be set to permissive or disabled; permissive still prints warn messages. Run on every node:
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
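To verify (until the next reboot getenforce reports Permissive; after rebooting with the config change it reports Disabled):
getenforce
grep '^SELINUX=' /etc/selinux/config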
2.8 Set the Time Zone and Synchronize Time
timedatectl set-timezone Asia/Shanghai
systemctl enable --now chronyd
Check the synchronization status:
timedatectl status
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog && systemctl restart crond
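To confirm chrony is actually syncing (the '*' marks the currently selected time source):
chronyc sources -v
chronyc tracking | grep 'Leap status'   # Normal means the clock is synchronized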
2.9 Disable the Firewall
systemctl stop firewalld
systemctl disable firewalld
2.10 Tune Kernel Parameters
cp /etc/sysctl.conf{,.bak}
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
echo "vm.swappiness = 0" >> /etc/sysctl.conf
modprobe br_netfilter
sysctl -p
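To confirm the values are active:
sysctl -n net.ipv4.ip_forward                  # expect 1
sysctl -n net.bridge.bridge-nf-call-iptables   # expect 1 (requires br_netfilter to be loaded)
sysctl -n vm.swappiness                        # expect 0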
2.11 Enable IPVS Support
vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
/sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe ${kernel_module}
fi
done
chmod 755 /etc/sysconfig/modules/ipvs.modules
sh /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs
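Optionally install ipvsadm to inspect IPVS rules later. Note that kube-proxy only uses IPVS if its mode is explicitly set to ipvs; these notes keep the default iptables mode, so this is purely for inspection:
yum install -y ipvsadm
ipvsadm -Ln   # lists IPVS virtual servers (empty while kube-proxy runs in iptables mode)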
2.12 Upgrade the Kernel
Reference link.
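A minimal sketch of the usual ELRepo-based upgrade that produces the 5.4 kernel-lt kernel listed in section 2.2 (treat the repository URL and package name as assumptions and check the ELRepo documentation before relying on them):
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Make the new kernel the default boot entry, then reboot and verify with uname -r
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot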
3 Deploy Docker
Docker must be installed on all nodes.
3.1 Add the Docker Yum Repository
# Install required dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun docker-ce yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Rebuild the yum cache
yum makecache fast
3.2 Install Docker
# List available Docker versions
yum list docker-ce.x86_64 --showduplicates | sort -r
Loaded plugins: fastestmirror
Installed Packages
Available Packages
Loading mirror speeds from cached hostfile
* elrepo: mirrors.tuna.tsinghua.edu.cn
docker-ce.x86_64 3:20.10.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.8-3.el7 @docker-ce-stable
docker-ce.x86_64 3:20.10.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:20.10.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.15-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.14-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.13-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.12-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.11-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.10-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
# Install the specified Docker version
yum install -y docker-ce-20.10.8-3.el7
This example installs version 20.10.8; note that the version string passed to yum does not include the epoch prefix, i.e. use 20.10.8-3.el7, not 3:20.10.8-3.el7.
3.3 Ensure the Network Modules Load at Boot
lsmod | grep overlay
lsmod | grep br_netfilter
If the commands above print nothing or report that the module does not exist, run the following:
cat > /etc/modules-load.d/docker.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
3.4 Make Bridged Traffic Visible to iptables
Run on every node:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Verify that the settings took effect; both commands should return 1.
sysctl -n net.bridge.bridge-nf-call-iptables
sysctl -n net.bridge.bridge-nf-call-ip6tables
3.5 Configure Docker
mkdir /etc/docker
Set the cgroup driver to systemd (recommended by Kubernetes), limit container log size, and set the storage driver to overlay2; the Docker data directory at the end can be changed as needed:
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"registry-mirrors": ["https://gp8745ui.mirror.aliyuncs.com"],
"data-root": "/data/docker"
}
EOF
Modify line 13 of the service unit file (the ExecStart line shown below):
vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --default-ulimit core=0:0
systemctl daemon-reload
Enable the service at boot and start it immediately:
systemctl enable --now docker
3.6 Verify That Docker Works
# Check docker info and confirm it matches the configuration
docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
scan: Docker Scan (Docker Inc., v0.8.0)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.8
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: e25210fe30a0a703442421b0f60afac609f950a3
runc version: v1.0.1-0-g4144b63
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 5.4.138-1.el7.elrepo.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.846GiB
Name: centos79-node1
ID: GFMO:BC7P:5L4S:JACH:EX5I:L6UM:AINU:A3SE:E6B6:ZLBQ:UBPG:QV7O
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
# hello-world test
docker run --rm hello-world
# Remove the test image
docker rmi hello-world
3.7 Add Users to the docker Group
This lets non-root users run docker commands without sudo.
# Add the user to the docker group
usermod -aG docker <USERNAME>
# Apply the new docker group membership to the current session
newgrp docker
4 Deploy the Kubernetes Cluster
Unless stated otherwise, perform the following steps on every node.
4.1 Add the Kubernetes Yum Repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Rebuild the yum cache; enter y to accept the GPG keys
yum makecache fast
4.2 Install kubeadm, kubelet, and kubectl
- kubeadm and kubelet must be installed on every node.
- kubectl only needs to be installed on centos79-node1 (worker nodes cannot use kubectl against the cluster by default, so it can be skipped there).
# List the available versions
yum list kubeadm.x86_64 --showduplicates | sort -r
version=1.21.3-0
yum install -y kubelet-${version} kubeadm-${version} kubectl-${version}
systemctl enable kubelet
4.3 Configure Command Auto-Completion
# Install the bash-completion package
yum install bash-completion -y
# Set up kubectl and kubeadm completion; takes effect at the next login
kubectl completion bash >/etc/bash_completion.d/kubectl
kubeadm completion bash > /etc/bash_completion.d/kubeadm
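To enable completion in the current shell without logging out again (optional):
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
source <(kubeadm completion bash)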
4.4 Configure a Proxy for Docker (skipped here; the Aliyun mirror is used instead)
When kubeadm deploys a Kubernetes cluster, it pulls images such as k8s.gcr.io/kube-apiserver from Google's registry k8s.gcr.io by default, which is unreachable from mainland China. If necessary, configure a suitable proxy to fetch these images, or pull them from Docker Hub and retag them locally.
Briefly, to configure a proxy: edit the /lib/systemd/system/docker.service file and add lines like the following to the [Service] section, replacing PROXY_SERVER_IP and PROXY_PORT with your actual values.
Environment="HTTP_PROXY=http://$PROXY_SERVER_IP:$PROXY_PORT"
Environment="HTTPS_PROXY=https://$PROXY_SERVER_IP:$PROXY_PORT"
Environment="NO_PROXY=192.168.4.0/24"
After configuring, reload systemd and restart the docker service:
systemctl daemon-reload
systemctl restart docker.service
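An alternative that survives docker package upgrades is a systemd drop-in rather than editing the unit file itself; a sketch (the file name http-proxy.conf is arbitrary, and the placeholders still need real values):
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://$PROXY_SERVER_IP:$PROXY_PORT"
Environment="HTTPS_PROXY=https://$PROXY_SERVER_IP:$PROXY_PORT"
Environment="NO_PROXY=127.0.0.1,localhost,192.168.4.0/24"
EOF
systemctl daemon-reload && systemctl restart docker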
Note in particular that in a kubeadm-deployed cluster, the core components kube-apiserver, kube-controller-manager, kube-scheduler, and etcd all run as static Pods, and the images they depend on come from the k8s.gcr.io registry by default. Since that registry cannot be reached directly, there are two common workarounds; this guide uses the former, which is easier to work with:
- Use a proxy service that can reach the registry.
- Use a mirror hosted in China, such as gcr.azk8s.cn/google_containers or registry.aliyuncs.com/google_containers (as tested, discontinued as of v1.22.0).
4.5 List the Images Required by a Given Kubernetes Version
kubeadm config images list --kubernetes-version v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
4.6 Pull the Images
vim pullimages.sh
#!/bin/bash
# pull images
ver=v1.21.3
registry=registry.cn-hangzhou.aliyuncs.com/google_containers
images=`kubeadm config images list --kubernetes-version=$ver |awk -F '/' '{print $2}'`
for image in $images
do
if [ $image != coredns ];then
docker pull ${registry}/$image
if [ $? -eq 0 ];then
docker tag ${registry}/$image k8s.gcr.io/$image
docker rmi ${registry}/$image
else
echo "ERROR: 下载镜像报错,$image"
fi
else
docker pull coredns/coredns:1.8.0
docker tag coredns/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker rmi coredns/coredns:1.8.0
fi
done
chmod +x pullimages.sh && ./pullimages.sh
Once the pulls complete, run docker images to check:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.21.3 3d174f00aa39 3 weeks ago 126MB
k8s.gcr.io/kube-scheduler v1.21.3 6be0dc1302e3 3 weeks ago 50.6MB
k8s.gcr.io/kube-proxy v1.21.3 adb2816ea823 3 weeks ago 103MB
k8s.gcr.io/kube-controller-manager v1.21.3 bc2bb319a703 3 weeks ago 120MB
k8s.gcr.io/pause 3.4.1 0f8457a4c2ec 6 months ago 683kB
k8s.gcr.io/coredns/coredns v1.8.0 296a6d5035e2 9 months ago 42.5MB
k8s.gcr.io/etcd 3.4.13-0 0369cf4303ff 11 months ago 253MB
Export the images and copy them to the other nodes:
docker save $(docker images | grep -v REPOSITORY | awk 'BEGIN{OFS=":";ORS=" "}{print $1,$2}') -o k8s-images.tar
scp k8s-images.tar root@centos79-node2:~
scp k8s-images.tar root@centos79-node3:~
Import them on the other nodes:
docker load -i k8s-images.tar
4.7 Set the Default cgroup Driver in the kubelet Configuration
mkdir -p /var/lib/kubelet
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
4.8 Initialize the Master Node
Only the centos79-node1 node needs to perform this step.
4.8.1 Generate the kubeadm Init Configuration File
[Optional] Only needed when you want to customize the init configuration.
kubeadm config print init-defaults > kubeadm-config.yaml
Edit the configuration file:
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
# Replace with:
localAPIEndpoint:
  advertiseAddress: 192.168.4.120
name: centos79-node1
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
# Replace with:
kubernetesVersion: 1.21.3
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
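For reference, a minimal sketch of what the edited kubeadm-config.yaml might end up looking like for this topology (a sketch, assuming the v1beta2 API that kubeadm 1.21 prints by default; note that the node name belongs under nodeRegistration, otherwise kubeadm emits the unknown field "name" warning visible in the init output below):
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.4.120
  bindPort: 6443
nodeRegistration:
  name: centos79-node1
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
EOF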
4.8.2 Check That the Environment Is Ready
kubeadm init phase preflight
I0810 13:46:36.581916 20512 version.go:254] remote version is much newer: v1.22.0; falling back to: stable-1.21
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
4.8.3 Initialize the Master
10.244.0.0/16 is the Pod network CIDR that flannel expects by default; the value to use depends on your network add-on.
kubeadm init --config=kubeadm-config.yaml --ignore-preflight-errors=2 --upload-certs | tee kubeadm-init.log
The output is as follows:
W0810 14:55:25.741990 13062 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "name"
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "node" could not be reached
[WARNING Hostname]: hostname "node": lookup node on 223.5.5.5:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node] and IPs [10.96.0.1 192.168.4.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node] and IPs [192.168.4.120 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node] and IPs [192.168.4.120 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.503592 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
fceedfd1392b27957c5f6345661d62dc09359b61e07f76f444a9e3095022dab4
[mark-control-plane] Marking the node node as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.4.120:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6ad6978a7e72cfae06c836886276634c87bedfa8ff02e44f574ffb96435b4c2b
4.8.4 Grant kubectl Access to a Regular User
su - iuskye
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/admin.conf
sudo chown $(id -u):$(id -g) $HOME/.kube/admin.conf
echo "export KUBECONFIG=$HOME/.kube/admin.conf" >> ~/.bashrc
exit
4.8.5 Configure Authentication on the Master
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
. /etc/profile
Without this, kubectl reports: The connection to the server localhost:8080 was refused - did you specify the right host or port?
At this point the master has been initialized successfully, but no network add-on is installed yet, so it cannot communicate with the other nodes.
4.8.6 Install the Network Add-on
Using flannel as an example:
curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml # the image pull here is very slow; pull it manually first and retry a few times if needed
docker pull quay.io/coreos/flannel:v0.14.0
kubectl apply -f kube-flannel.yml
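To watch the add-on come up (the app=flannel label matches this version of the manifest; adjust if your copy differs):
kubectl -n kube-system get pods -l app=flannel -o wide
kubectl -n kube-system get pods | grep -E 'coredns|flannel'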
4.8.7 Check the Status of centos79-node1
kubectl get nodes
NAME STATUS ROLES AGE VERSION
centos79-node2 NotReady <none> 7m29s v1.21.3
centos79-node3 NotReady <none> 7m15s v1.21.3
node Ready control-plane,master 33m v1.21.3
If STATUS shows NotReady, run kubectl describe node centos79-node2 to see the details; on lower-powered servers it takes longer to reach Ready.
4.9 Initialize the Worker Nodes and Join Them to the Cluster
4.9.1 Get the Cluster Join Command
On centos79-node1, run the command to create a new token:
kubeadm token create --print-join-command
This also prints the command for joining the cluster:
kubeadm join 192.168.4.120:6443 --token 8dj8i5.6jua6ogqvve1ci5u --discovery-token-ca-cert-hash sha256:6ad6978a7e72cfae06c836886276634c87bedfa8ff02e44f574ffb96435b4c2b
Alternatively, the token from the kubeadm init output on the master shown above can be used.
4.9.2 Run the Join Command on the Worker Nodes
kubeadm join 192.168.4.120:6443 --token 8dj8i5.6jua6ogqvve1ci5u --discovery-token-ca-cert-hash sha256:6ad6978a7e72cfae06c836886276634c87bedfa8ff02e44f574ffb96435b4c2b
[preflight] Running pre-flight checks
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
4.10 Check Cluster Node Status
kubectl get nodes
NAME STATUS ROLES AGE VERSION
centos79-node2 NotReady <none> 7m29s v1.21.3
centos79-node3 NotReady <none> 7m15s v1.21.3
node Ready control-plane,master 33m v1.21.3
The worker nodes show NotReady at first; don't worry, after a few minutes they will be Ready:
NAME STATUS ROLES AGE VERSION
centos79-node2 Ready <none> 8m29s v1.21.3
centos79-node3 Ready <none> 8m15s v1.21.3
node Ready control-plane,master 34m v1.21.3
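As a quick smoke test of scheduling and networking, a throwaway nginx Deployment can be created and exposed via a NodePort (assumes the nodes can pull nginx from Docker Hub; the names used here are arbitrary):
kubectl create deployment nginx-test --image=nginx:1.21 --replicas=2
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods -l app=nginx-test -o wide
# Find the assigned NodePort and curl any node on it
NODE_PORT=$(kubectl get svc nginx-test -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.4.121:${NODE_PORT}
# Clean up
kubectl delete svc,deployment nginx-test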
4.11 Deploy the Dashboard
4.11.1 Deployment
curl -o recommended.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
By default the Dashboard is only reachable from inside the cluster. Change the Service to type NodePort to expose it externally:
vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
kubectl apply -f recommended.yaml # the image pulls here are very slow; pull them manually first and retry a few times if needed
docker pull kubernetesui/dashboard:v2.3.1
docker pull kubernetesui/metrics-scraper:v1.0.6
kubectl apply -f recommended.yaml
kubectl get pods,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-856586f554-nb68k 0/1 ContainerCreating 0 52s
pod/kubernetes-dashboard-67484c44f6-shtz7 0/1 ContainerCreating 0 52s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.96.188.208 <none> 8000/TCP 52s
service/kubernetes-dashboard NodePort 10.97.164.152 <none> 443:30001/TCP 53s
The Pods are still creating their containers; check again shortly:
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-856586f554-nb68k 1/1 Running 0 2m11s
pod/kubernetes-dashboard-67484c44f6-shtz7 1/1 Running 0 2m11s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.96.188.208 <none> 8000/TCP 2m11s
service/kubernetes-dashboard NodePort 10.97.164.152 <none> 443:30001/TCP 2m12s
Access the Dashboard at https://NodeIP:30001. Use Firefox; Chrome will not open sites with untrusted SSL certificates.
Create a service account and bind it to the default cluster-admin cluster role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-q2kjk
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: fa1e812e-4487-4288-a444-d4ba49711366
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlJ4OWQ5ZUJ5MDlEMkdQSnBYeUtXZDg5M2ZjX090RkhPOUtQZ3JTc1B0Z0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcTJramsiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZmExZTgxMmUtNDQ4Ny00Mjg4LWE0NDQtZDRiYTQ5NzExMzY2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.nCpdYK5SjhAI8wqDP6QEDx9dyD4n5yCrx8eZ3R5XkR99vo8diMFdL_6VHtiQekQpwVc7vCkQ0qYhpaGjD2Pzn4EpU44UhQFH5EpG4L5zYvQf6QHBgaZJ68dQe1nMUUMto2jbTq8lEBt3FsJT_If6TkfeHtwfR-X8D2Nm1M8E153hXUPycSbGZImPeE-JVqRC3IJuhv6xgYi-EE08va2d6kDd4MBm-XdCm7QweG5cZaCQAP1qqF8kPfNZzelAGDe6F8V2caxAUECpNE6e4ZW2-h0D7Hp4bZpM4hZZpVr6WCfxuKXwPd-2srorjLi8h_lqSdZCJKJ56TpsED6nkBRffg
The token obtained:
eyJhbGciOiJSUzI1NiIsImtpZCI6IlJ4OWQ5ZUJ5MDlEMkdQSnBYeUtXZDg5M2ZjX090RkhPOUtQZ3JTc1B0Z0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcTJramsiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZmExZTgxMmUtNDQ4Ny00Mjg4LWE0NDQtZDRiYTQ5NzExMzY2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.nCpdYK5SjhAI8wqDP6QEDx9dyD4n5yCrx8eZ3R5XkR99vo8diMFdL_6VHtiQekQpwVc7vCkQ0qYhpaGjD2Pzn4EpU44UhQFH5EpG4L5zYvQf6QHBgaZJ68dQe1nMUUMto2jbTq8lEBt3FsJT_If6TkfeHtwfR-X8D2Nm1M8E153hXUPycSbGZImPeE-JVqRC3IJuhv6xgYi-EE08va2d6kDd4MBm-XdCm7QweG5cZaCQAP1qqF8kPfNZzelAGDe6F8V2caxAUECpNE6e4ZW2-h0D7Hp4bZpM4hZZpVr6WCfxuKXwPd-2srorjLi8h_lqSdZCJKJ56TpsED6nkBRffg
Note that the token may be wrapped across lines when pasted; if so, join it back into a single line in a text editor.
Log in to the Dashboard with this token.
4.11.2 Login Page


4.11.3 Pods

4.11.4 Service

4.11.5 Config Maps

4.11.6 Secrets

4.11.7 Cluster Role Bindings

4.11.8 Namespaces

5 Images Provided by the Author
docker pull registry.cn-beijing.aliyuncs.com/iuskye/kube-apiserver:v1.21.3
docker pull registry.cn-beijing.aliyuncs.com/iuskye/kube-scheduler:v1.21.3
docker pull registry.cn-beijing.aliyuncs.com/iuskye/kube-proxy:v1.21.3
docker pull registry.cn-beijing.aliyuncs.com/iuskye/kube-controller-manager:v1.21.3
docker pull registry.cn-beijing.aliyuncs.com/iuskye/coredns:v1.8.0
docker pull registry.cn-beijing.aliyuncs.com/iuskye/etcd:3.4.13-0
docker pull registry.cn-beijing.aliyuncs.com/iuskye/pause:3.4.1
docker pull registry.cn-beijing.aliyuncs.com/iuskye/dashboard:v2.3.1
docker pull registry.cn-beijing.aliyuncs.com/iuskye/metrics-scraper:v1.0.6
docker pull registry.cn-beijing.aliyuncs.com/iuskye/flannel:v0.14.0
Retag:
docker tag registry.cn-beijing.aliyuncs.com/iuskye/kube-apiserver:v1.21.3 k8s.gcr.io/kube-apiserver:v1.21.3
docker tag registry.cn-beijing.aliyuncs.com/iuskye/kube-scheduler:v1.21.3 k8s.gcr.io/kube-scheduler:v1.21.3
docker tag registry.cn-beijing.aliyuncs.com/iuskye/kube-proxy:v1.21.3 k8s.gcr.io/kube-proxy:v1.21.3
docker tag registry.cn-beijing.aliyuncs.com/iuskye/kube-controller-manager:v1.21.3 k8s.gcr.io/kube-controller-manager:v1.21.3
docker tag registry.cn-beijing.aliyuncs.com/iuskye/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker tag registry.cn-beijing.aliyuncs.com/iuskye/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.cn-beijing.aliyuncs.com/iuskye/pause:3.4.1 k8s.gcr.io/pause:3.4.1
docker tag registry.cn-beijing.aliyuncs.com/iuskye/dashboard:v2.3.1 kubernetesui/dashboard:v2.3.1
docker tag registry.cn-beijing.aliyuncs.com/iuskye/metrics-scraper:v1.0.6 kubernetesui/metrics-scraper:v1.0.6
docker tag registry.cn-beijing.aliyuncs.com/iuskye/flannel:v0.14.0 quay.io/coreos/flannel:v0.14.0
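The pulls and retags above can be wrapped in a small loop; a sketch that simply mirrors the mappings listed in this section:
src=registry.cn-beijing.aliyuncs.com/iuskye
declare -A map=(
  [kube-apiserver:v1.21.3]=k8s.gcr.io/kube-apiserver:v1.21.3
  [kube-scheduler:v1.21.3]=k8s.gcr.io/kube-scheduler:v1.21.3
  [kube-proxy:v1.21.3]=k8s.gcr.io/kube-proxy:v1.21.3
  [kube-controller-manager:v1.21.3]=k8s.gcr.io/kube-controller-manager:v1.21.3
  [coredns:v1.8.0]=k8s.gcr.io/coredns/coredns:v1.8.0
  [etcd:3.4.13-0]=k8s.gcr.io/etcd:3.4.13-0
  [pause:3.4.1]=k8s.gcr.io/pause:3.4.1
  [dashboard:v2.3.1]=kubernetesui/dashboard:v2.3.1
  [metrics-scraper:v1.0.6]=kubernetesui/metrics-scraper:v1.0.6
  [flannel:v0.14.0]=quay.io/coreos/flannel:v0.14.0
)
for img in "${!map[@]}"; do
  docker pull $src/$img && docker tag $src/$img ${map[$img]} && docker rmi $src/$img
done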
6 References