Deploying K8S 1.28.0 on Ubuntu with Docker as the Underlying Runtime


1 Base Environment

Node plan:

IP          hostname  Version
10.0.0.200  master    Ubuntu 20.04
10.0.0.201  node1     Ubuntu 20.04
10.0.0.202  node2     Ubuntu 20.04

1.1 k8s 1.28.x Release Highlights

Kubernetes v1.28 is the second major release of 2023 and contains 46 notable enhancements. The year's first release, v1.27, had nearly 60, so even after the release-cadence adjustment each Kubernetes version still ships a large number of changes. Of these 46 enhancements, 20 are entering Alpha, 14 are graduating to Beta, and 12 are graduating to Stable, so many of them are brand-new features. For more detail see this overview of the release: https://zhuanlan.zhihu.com/p/649838674

2 Installation

2.1 Configure the hosts File for Hostname Resolution

1. Configure it on the master node

[05:06:41 root@master ~]#vim /etc/hosts

# Add the IPs and hostnames of the K8S nodes
10.0.0.200 master
10.0.0.201 node1
10.0.0.202 node2

2. Set up passwordless SSH from the master node

[05:06:41 root@master ~]#ssh-keygen
[05:07:09 root@master ~]#ssh-copy-id 10.0.0.200
[05:07:09 root@master ~]#ssh-copy-id 10.0.0.201
[05:07:09 root@master ~]#ssh-copy-id 10.0.0.202

2.2 Disable the Swap Partition

Disable swap, and disable it permanently. The reason: when kubeadm initializes K8S, `kubeadm init` checks whether the swap partition is off. If swap is enabled, container Pods may end up partly in swap, which severely degrades their performance, so kubeadm requires it to be disabled. Keeping swap off is also recommended in production, for the same reason.

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
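To see what the sed expression does before touching the real /etc/fstab, it can be run against a sample fstab fragment (the two lines below are made up for illustration):

```shell
# The pattern / swap / matches any fstab line with a swap entry and
# comments it out; other lines pass through unchanged.
printf 'UUID=abcd / ext4 defaults 0 1\n/swap.img none swap sw 0 0\n' \
  | sed '/ swap / s/^\(.*\)$/#\1/g'
# → UUID=abcd / ext4 defaults 0 1
# → #/swap.img none swap sw 0 0
```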

2.3 Tune Kernel Parameters for K8S

1. Apply the kernel tuning parameters.

modprobe br_netfilter

cat > kubernetes.conf <<EOF
# Enable iptables filtering for bridged traffic
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# Disable the IPv6 protocol
net.ipv6.conf.all.disable_ipv6=1
# Enable IPv4 forwarding
net.ipv4.ip_forward=1
# net.ipv4.tcp_tw_recycle=0 # parameter not available on Ubuntu
# Avoid swap; only use it when the system would otherwise OOM
vm.swappiness=0
# Do not check whether physical memory is sufficient (allow overcommit)
vm.overcommit_memory=1
# Do not panic on OOM
vm.panic_on_oom=0
# inotify limits (size according to available memory and disk)
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
# Maximum number of open file handles
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
# TCP keepalive tuning
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
# net.ipv4.ip_conntrack_max = 65536 # parameter not available on Ubuntu
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 32768
EOF

2. Copy the file into /etc/sysctl.d/ so it is applied at boot

cp kubernetes.conf /etc/sysctl.d/kubernetes.conf

3. Reload kubernetes.conf manually so it takes effect immediately

sysctl -p /etc/sysctl.d/kubernetes.conf 
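Each sysctl key corresponds to a file under /proc/sys, so individual values can be spot-checked without parsing sysctl output. A quick example for the forwarding flag:

```shell
# net.ipv4.ip_forward <-> /proc/sys/net/ipv4/ip_forward (dots become slashes)
cat /proc/sys/net/ipv4/ip_forward   # shows 1 once the config has been applied
```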

4. Switch iptables to legacy mode

# On Linux, nftables is currently available as a replacement for the kernel's iptables subsystem; the iptables tool can act as a compatibility layer that behaves like iptables but actually configures nftables.
apt list | grep "nftables/focal"
nftables/focal 0.9.3-2 amd64
python3-nftables/focal 0.9.3-2 amd64

# Switch iptables to legacy mode (the nftables backend is incompatible with the current kubeadm packages: it leads to duplicated firewall rules and breaks kube-proxy, so the iptables tooling must be switched to "legacy" mode to avoid these problems)
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

2.4 Adjust the System Time

# Set the system timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog 

2.5 Install IPVS

Run the following steps on all three nodes.

sudo apt -y install ipvsadm ipset sysstat conntrack

mkdir ~/k8s-init/

# Write the module-load script
tee ~/k8s-init/ipvs.modules <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_lc
modprobe -- ip_vs_lblc
modprobe -- ip_vs_lblcr
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- ip_vs_dh
modprobe -- ip_vs_fo
modprobe -- ip_vs_nq
modprobe -- ip_vs_sed
modprobe -- ip_vs_ftp
modprobe -- ip_tables
modprobe -- ip_set
modprobe -- ipt_set
modprobe -- ipt_rpfilter
modprobe -- ipt_REJECT
modprobe -- ipip
modprobe -- xt_set
modprobe -- br_netfilter
modprobe -- nf_conntrack
EOF

# Load the kernel modules now, and persistently via profile.d; note: run as root
chmod 755 ~/k8s-init/ipvs.modules && sudo bash ~/k8s-init/ipvs.modules

sudo cp ~/k8s-init/ipvs.modules /etc/profile.d/ipvs.modules.sh

lsmod | grep -e ip_vs -e nf_conntrack

#ip_vs_ftp              16384  0
#nf_nat                 45056  1 ip_vs_ftp
#ip_vs_sed              16384  0
#ip_vs_nq               16384  0
#ip_vs_fo               16384  0
#ip_vs_dh               16384  0
#ip_vs_sh               16384  0
#ip_vs_wrr              16384  0
#ip_vs_rr               16384  0
#ip_vs_lblcr            16384  0
#ip_vs_lblc             16384  0
#ip_vs_lc               16384  0
#ip_vs                 155648  22 ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
#nf_conntrack          139264  2 nf_nat,ip_vs
#nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
#nf_defrag_ipv4         16384  1 nf_conntrack
#libcrc32c              16384  5 nf_conntrack,nf_nat,btrfs,raid456,ip_vs

2.6 Configure rsyslogd and systemd-journald Logging

sudo mkdir -pv /var/log/journal/ /etc/systemd/journald.conf.d/
sudo tee /etc/systemd/journald.conf.d/99-prophet.conf <<'EOF'
[Journal]
# Persist logs to disk
Storage=persistent

# Compress historical logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Maximum disk usage: 10G
SystemMaxUse=10G

# Maximum size of a single log file: 100M
SystemMaxFileSize=100M

# Retain logs for 2 weeks
MaxRetentionSec=2week

# Do not forward logs to syslog
ForwardToSyslog=no
EOF

cp /etc/systemd/journald.conf.d/99-prophet.conf ~/k8s-init/journald-99-prophet.conf
sudo systemctl restart systemd-journald

2.7 Install Docker

# 1. Remove old versions
sudo apt-get remove docker docker-engine docker.io containerd runc

# 2. Update the apt package index and install packages that let apt use repositories over HTTPS
sudo apt-get install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg-agent \
  software-properties-common

# 3. Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# 4. Verify the key by searching for the last 8 characters of the fingerprint
sudo apt-key fingerprint 0EBFCD88

# 5. Set up the stable repository
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

# 6. Install Docker Engine (pinned to 19.03.15 here; drop the version strings to install the latest)
sudo apt-get update && sudo apt-get install -y docker-ce=5:19.03.15~3-0~ubuntu-focal docker-ce-cli=5:19.03.15~3-0~ubuntu-focal containerd.io

# 7. To install a specific version of Docker Engine, list the versions available in the repo:
# $ apt-cache madison docker-ce
# docker-ce | 5:20.10.2~3-0~ubuntu-focal | https://download.docker.com/linux/ubuntu focal/stable amd64 Packages
# docker-ce | 5:18.09.1~3-0~ubuntu-xenial | https://download.docker.com/linux/ubuntu  xenial/stable amd64 Packages
# Install a specific version using the version string from the second column, e.g. 5:18.09.1~3-0~ubuntu-xenial:
# $ sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io

# 8. Configure the registry mirror and daemon options
sudo mkdir -vp /etc/docker/
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hpqoo1ip.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
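Before restarting Docker it is worth validating daemon.json, since a JSON syntax error would keep dockerd from starting. A minimal sketch using python3's json.tool (assumes python3 is installed; the same check can be run directly against /etc/docker/daemon.json after writing it):

```shell
# Pipe the intended daemon.json through a JSON parser; a non-zero exit
# means a syntax error, so the success message is only printed if valid.
cat <<'EOF' | python3 -m json.tool >/dev/null && echo "daemon.json OK"
{
  "registry-mirrors": ["https://hpqoo1ip.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
```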

# 9. Enable at boot and start
sudo systemctl enable --now docker
sudo systemctl restart docker

# 10. Log out and back in for the changes to take effect
exit

2.8 Set Up the cri-dockerd Environment

1. Install cri-dockerd

# Install the libcgroup tools
apt install cgroup-tools -y

mkdir cri-docker

cd cri-docker

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4.amd64.tgz

tar xf cri-dockerd-0.3.4.amd64.tgz 

mv cri-dockerd/cri-dockerd  /usr/local/bin/

2. Create the service and socket unit files

# Configure cri-docker.service (quote the heredoc delimiter so the shell does not expand $MAINPID)
cat > /etc/systemd/system/cri-docker.service << 'EOF'
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --network-plugin=cni   --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

# Configure cri-docker.socket
cat > /lib/systemd/system/cri-docker.socket << EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

3. Start and enable the services

systemctl daemon-reload   && systemctl enable cri-docker   && systemctl start cri-docker && systemctl enable --now cri-docker.socket 

2.9 Install kubeadm

Install kubeadm from the Aliyun mirror; for reference see: https://developer.aliyun.com/mirror/kubernetes

apt-get update && apt-get install -y apt-transport-https

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update

# Install the latest kubelet, kubeadm and kubectl (option 1 of 2)
sudo apt-get install -y kubelet kubeadm kubectl

# Install a specific version of kubelet, kubeadm and kubectl; here 1.28.2 (option 2 of 2)
sudo apt-get install -y kubelet=1.28.2-00 kubeadm=1.28.2-00 kubectl=1.28.2-00

# Enable at boot and start
systemctl enable --now kubelet.service

# Check the version
kubeadm version

2.10 Prepare the Images

1. List the required images

kubeadm  config images list --kubernetes-version v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1

2. Pull them from the Aliyun mirror instead

cat images.sh 
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1
# Run the script
bash images.sh
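Instead of hand-writing images.sh, the pull commands can be generated from the `kubeadm config images list` output. A sketch of the transformation with sed (the two sample input lines come from the listing above; note that coredns lives under `coredns/coredns` upstream but flat under the mirror):

```shell
# Rewrite registry.k8s.io image names into Aliyun-mirror pull commands.
printf '%s\n' \
  'registry.k8s.io/kube-apiserver:v1.28.2' \
  'registry.k8s.io/coredns/coredns:v1.10.1' \
| sed -e 's#registry.k8s.io/coredns/coredns#registry.k8s.io/coredns#' \
      -e 's#^registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/google_containers#' \
      -e 's#^#docker pull #'
# → docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
# → docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1
```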

3 Initialize the Master Node

1. Initialize the master node (note: run this only on the master)

Add --cri-socket=unix:///run/cri-dockerd.sock so kubeadm uses cri-dockerd as the runtime.

kubeadm init   --apiserver-advertise-address=10.0.0.200  --apiserver-bind-port=6443 --kubernetes-version=1.28.2  --pod-network-cidr=10.200.0.0/16 --service-cidr=172.30.0.0/24 --service-dns-domain=cluster.local  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap   --cri-socket=unix:///run/cri-dockerd.sock

2. On success, the output ends with:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.200:6443 --token wl1eid.1wefv8489utx0pvy \
    --discovery-token-ca-cert-hash sha256:e3acf5e927b12bb4ee7767202e31265fe309b6fecb681b85982eebd06aee68ea 

3. Run the following to set up the kubeconfig with the right permissions

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Join the worker nodes

Note: worker nodes must also specify the cri-dockerd socket when joining.

kubeadm join 10.0.0.200:6443 --token wl1eid.1wefv8489utx0pvy \
    --discovery-token-ca-cert-hash sha256:e3acf5e927b12bb4ee7767202e31265fe309b6fecb681b85982eebd06aee68ea --cri-socket=unix:///run/cri-dockerd.sock
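If the CA hash is lost, it can be recomputed from the cluster CA certificate (on the master that is /etc/kubernetes/pki/ca.crt). The pipeline below demonstrates the calculation on a throwaway self-signed certificate so it can be run anywhere; the paths and CN are hypothetical:

```shell
# Generate a throwaway CA certificate purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj /CN=demo \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Compute the discovery-token-ca-cert-hash: SHA-256 over the DER-encoded
# public key, which is what kubeadm expects after "sha256:"
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

A fresh join token itself can be issued on the master with `kubeadm token create --print-join-command`.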

5. Verify that all nodes have joined (they report NotReady for now because no network plugin is installed yet; that is fixed in the next section)

# kubectl get node
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   2m47s   v1.28.2
node1    NotReady   <none>          35s     v1.28.2
node2    NotReady   <none>          33s     v1.28.2

4 Install the Calico Network Plugin

Note: run all of the following on the master node.

1. Download the Calico manifest

curl https://docs.tigera.io/archive/v3.25/manifests/calico-etcd.yaml -O

2. Adjust the etcd configuration

The last step below sets the autodetection NIC to eth0; replace it with the name of the primary NIC on your system.

# Update the Pod network CIDR
POD_CIDR="10.200.0.0/16"
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico-etcd.yaml

# Fill in the etcd certificates
sed -i 's/# \(etcd-.*\)/\1/' calico-etcd.yaml
etcd_key=$(cat /etc/kubernetes/pki/etcd/peer.key | base64 -w 0)
etcd_crt=$(cat /etc/kubernetes/pki/etcd/peer.crt | base64 -w 0)
etcd_ca=$(cat /etc/kubernetes/pki/etcd/ca.crt | base64 -w 0)
sed -i -e 's/\(etcd-key: \).*/\1'$etcd_key'/' \
    -e 's/\(etcd-cert: \).*/\1'$etcd_crt'/' \
    -e 's/\(etcd-ca: \).*/\1'$etcd_ca'/' calico-etcd.yaml
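`base64 -w 0` disables line wrapping, which matters here: each encoded certificate must stay on a single line for the sed substitutions above to capture it whole. A quick illustration:

```shell
# Without -w 0, GNU base64 wraps at 76 columns; with it, everything
# (including embedded newlines in the input) becomes one output line.
printf 'line1\nline2\n' | base64 -w 0
# → bGluZTEKbGluZTIK
```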

# Update the etcd endpoint address
ETCD=$(grep 'advertise-client-urls' /etc/kubernetes/manifests/etcd.yaml | awk -F= '{print $2}')

# Check the IP captured in the ETCD variable (https://10.0.0.200:2379)
echo $ETCD
https://10.0.0.200:2379
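The grep/awk pipeline simply takes everything after the `=` on the matching manifest line. Sketched on a sample line in the same format kubeadm writes to the static Pod manifest:

```shell
# A sample line as it appears in /etc/kubernetes/manifests/etcd.yaml
line='    - --advertise-client-urls=https://10.0.0.200:2379'
# -F= splits on "=", so $2 is the URL that follows it
echo "$line" | awk -F= '{print $2}'
# → https://10.0.0.200:2379
```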

sed -i -e 's@\(etcd_endpoints: \).*@\1"https://10.0.0.200:2379"@'     -e 's/\(etcd_.*:\).*#/\1/'     -e 's/replicas: 1/replicas: 2/' calico-etcd.yaml

# Specify the detection interface
sed '/autodetect/a\            - name: IP_AUTODETECTION_METHOD\n              value: "interface=eth0"' -i calico-etcd.yaml
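The `a\` command appends text after every line matching `autodetect`, and in GNU sed the embedded `\n` becomes a real newline, so two indented YAML lines are inserted. Its effect on a minimal excerpt (the excerpt is hypothetical but mirrors the indentation in calico-etcd.yaml):

```shell
# Two sample manifest lines; the append fires on the line containing
# "autodetect" and inserts the IP_AUTODETECTION_METHOD env var after it.
printf '%s\n' \
  '            - name: IP' \
  '              value: "autodetect"' \
| sed '/autodetect/a\            - name: IP_AUTODETECTION_METHOD\n              value: "interface=eth0"'
```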

3. Apply the Calico manifest

kubectl apply -f calico-etcd.yaml

4. Enable kubectl command completion

source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >> ~/.bashrc

5 Verification

1. Check that all Pods are Running and all nodes are Ready

$ kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-cffdcf85d-fvpxv   1/1     Running   0          4m25s
calico-kube-controllers-cffdcf85d-gvzf7   1/1     Running   0          41s
calico-node-fp7sx                         1/1     Running   0          4m25s
calico-node-qhg2d                         1/1     Running   0          4m25s
calico-node-zsnsf                         1/1     Running   0          4m25s
coredns-6554b8b87f-nrndg                  1/1     Running   0          21m
coredns-6554b8b87f-q2h4q                  1/1     Running   0          21m
etcd-master                               1/1     Running   0          21m
kube-apiserver-master                     1/1     Running   0          21m
kube-controller-manager-master            1/1     Running   0          21m
kube-proxy-c6rh2                          1/1     Running   0          19m
kube-proxy-df7d6                          1/1     Running   0          19m
kube-proxy-p7ngq                          1/1     Running   0          21m
kube-scheduler-master                     1/1     Running   0          21m

$ kubectl get node 
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   22m   v1.28.2
node1    Ready    <none>          20m   v1.28.2
node2    Ready    <none>          20m   v1.28.2

6 Run nginx on the Cluster to Verify It Works

1. Start nginx

$ kubectl create deployment nginx --image=nginx

$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7854ff8877-2vdtk   1/1     Running   0          41s

2. Expose the Pod for external access

kubectl expose deployment nginx --port=80 --type=NodePort

# NodePort 32668 is exposed externally
kubectl get svc nginx 
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   172.30.0.155   <none>        80:32668/TCP   12s

3. Access it from a browser, e.g. http://10.0.0.200:32668
