K8S Getting Started Series: Kubernetes 1.21 Binary High-Availability Deployment on Ubuntu

In production, etcd should run on SSD disks if at all possible, because Kubernetes queries etcd very frequently. We are not using kubeadm here because kubeadm runs the control-plane components as containers, which makes them comparatively awkward to manage.

Here I use Ansible for batch deployment. For etcd we use 3 servers so that the cluster data stays highly available.

2.1 Environment Preparation

Type  Server IP  Notes
Ansible (2 hosts)  10.0.0.101/102  K8S cluster deployment servers; can be co-located with the masters
K8S Master (2 hosts)  10.0.0.101/102  K8S control plane, active/standby HA through one VIP
Harbor (1 host)  10.0.0.105  Image registry server
Etcd (at least 3 hosts)  10.0.0.106/107/108  Servers that store the K8S cluster data
Haproxy+keepalived (2 hosts)  10.0.0.103/104  High-availability proxy servers for the apiserver VIP
Node (2-N hosts)  10.0.0.109/110  Servers that actually run containers; at least two for HA

The two Ansible hosts are placed on the K8S master hosts, one on each; Ansible is installed so that the various services can be deployed quickly.

K8S cluster master1: 10.0.0.101

K8S cluster master2: 10.0.0.102

Haproxy and keepalived shared host 1: 10.0.0.103

Haproxy and keepalived shared host 2: 10.0.0.104

Harbor:10.0.0.105

Etcd-1:10.0.0.106

Etcd-2:10.0.0.107

Etcd-3:10.0.0.108

Node-1:10.0.0.109

Node-2:10.0.0.110

Note: the etcd cluster must have an odd number of members (1, 3, 5, 7, ...); we normally use three for high availability.
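
As a quick worked example of the quorum arithmetic (my own illustration, not one of the original steps): quorum is floor(n/2)+1, so a 4-member cluster tolerates no more failures than a 3-member one, which is why even member counts gain nothing.

# quorum = floor(n/2) + 1, tolerated failures = n - quorum
for n in 1 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "members=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
# members=3 quorum=2 tolerated_failures=1
# members=4 quorum=3 tolerated_failures=1  (no gain over 3 members)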

Minimum server hardware configuration:

Master nodes (2): 3 GB RAM, 2 CPUs

Harbor: 1 GB RAM, 1 CPU

Node nodes (2): 4 GB RAM, 2 CPUs

Etcd nodes (3): 1.5 GB RAM, 1 CPU

Haproxy (2): 512 MB RAM, 1 CPU

Hostname settings

Type  Server IP  Hostname  VIP
K8S Master1  10.0.0.101  k8s-master1  10.0.0.188
K8S Master2  10.0.0.102  k8s-master2  10.0.0.188
Harbor1  10.0.0.105  k8s-harbor1
Node1  10.0.0.109  k8s-node1
Node2  10.0.0.110  k8s-node2
etcd node 1  10.0.0.106  k8s-etcd1
etcd node 2  10.0.0.107  k8s-etcd2
etcd node 3  10.0.0.108  k8s-etcd3
Haproxy1  10.0.0.103  k8s-haproxy1
Haproxy2  10.0.0.104  k8s-haproxy2

Software version list

For the full versions, see the kubernetes software list in the current directory.

API endpoint: 10.0.0.188:6443  # must be configured on the load balancer as a reverse proxy to the apiservers; the dashboard port is 8443

Operating system: Ubuntu Server 20.04.3

K8S version: 1.21.1

Project address:

This project is used for binary deployment of K8S:

https://github.com/easzlab/kubeasz

2.2 Manual Binary K8S Deployment

2.2.1 Deploying Docker

Docker needs to be deployed on the following nodes: master1, master2, harbor, node1 and node2.

Here I only demonstrate on the harbor node; the other nodes follow the same steps.

1. Install dependencies

[19:39:41 root@harbor ~]#sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common

2. Install the GPG key

[19:39:54 root@harbor ~]#curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

3. Add the docker-ce apt repository

[19:40:20 root@harbor ~]#sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

4. Update the package index

[19:40:59 root@harbor ~]#apt-get -y update

5. Install docker version 5:19.03.9~3-0~ubuntu-bionic

[19:41:31 root@harbor ~]#apt-get -y install docker-ce=5:19.03.9~3-0~ubuntu-bionic docker-ce-cli=5:19.03.9~3-0~ubuntu-bionic

6. Start docker and enable it at boot

[19:44:11 root@harbor ~]#systemctl start docker && systemctl enable docker
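
Before moving on, a quick sanity check of the installation (an optional step I am adding, standard docker/systemctl commands):

docker version --format 'server: {{.Server.Version}}'   # expect: server: 19.03.9
systemctl is-enabled docker                              # expect: enabled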

2.2.2 Deploying Harbor

Deploy the docker certificate so that each of our nodes can pull images from harbor.

1. Install docker-compose

[15:52:21 root@harbor ~]#wget https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64

[15:54:34 root@harbor ~]#mv docker-compose-Linux-x86_64 /usr/local/bin/docker-compose

[15:54:38 root@harbor ~]#chmod a+x /usr/local/bin/docker-compose

[15:54:41 root@harbor ~]#docker-compose --version
docker-compose version 1.29.2, build 5becea4c

2.2.2.2 Setting up Harbor

1. Download and extract harbor

# Run this on the harbor node

[15:22:41 root@harbor ~]#cd /usr/local/src/

[15:28:11 root@harbor src]#wget https://github.com/goharbor/harbor/releases/download/v2.1.6/harbor-offline-installer-v2.1.6.tgz

[15:29:16 root@harbor src]#tar xf harbor-offline-installer-v2.1.6.tgz 

2. Sign the certificate

[15:29:27 root@harbor src]#cd harbor/

# Create a certs directory to store the certificates
[17:02:46 root@harbor harbor]#mkdir certs/

# Generate the private key
[17:02:51 root@harbor harbor]#openssl genrsa -out /usr/local/src/harbor/certs/harbor-ca.key

# Sign the (self-signed) certificate
[17:02:55 root@harbor harbor]#openssl req -x509 -new -nodes -key /usr/local/src/harbor/certs/harbor-ca.key -subj "/CN=hub.zhangguiyuan.com" -days 7120 -out /usr/local/src/harbor/certs/harbor-ca.crt
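
Optionally, inspect the newly signed certificate to confirm the CN and validity period (an extra check I am adding; standard openssl usage):

openssl x509 -in /usr/local/src/harbor/certs/harbor-ca.crt -noout -subject -dates
# expect: subject=CN = hub.zhangguiyuan.com, with notBefore/notAfter roughly 7120 days apart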

3. Edit the configuration

[16:05:16 root@harbor harbor]#vim harbor.yml.tmpl
  hostname: hub.zhangguiyuan.com
  certificate: /usr/local/src/harbor/certs/harbor-ca.crt
  private_key: /usr/local/src/harbor/certs/harbor-ca.key
  harbor_admin_password: 123456

# Copy it to harbor.yml
[16:15:29 root@harbor harbor]#cp harbor.yml.tmpl harbor.yml

4. Run the harbor installer

[16:14:00 root@harbor harbor]#./install.sh

Check that harbor has been brought up.
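
One way to confirm this from the shell (my addition; install.sh generates a docker-compose file in the harbor directory, so docker-compose can report the container states):

cd /usr/local/src/harbor && docker-compose ps
# all harbor containers should be in an Up / healthy state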

2.2.2.3 Access Verification

Open https://10.0.0.105/ in a browser to verify.

Username: admin

Password: 123456

2.2.2.4 Create a Base Image Project

Log in to the admin page and create a project named baseimage, which is used as the path prefix when pushing images below.

2.2.2.5 Test Logging in to Harbor

1. Install docker on the master1 node to test whether harbor logins work normally.

1. Install dependencies

[19:39:41 root@k8s-master1 ~]#sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common

2. Install the GPG key

[19:39:54 root@k8s-master1 ~]#curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

3. Add the docker-ce apt repository

[19:40:20 root@k8s-master1 ~]#sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

4. Update the package index

[19:40:59 root@k8s-master1 ~]#apt-get -y update

5. Install docker version 5:19.03.9~3-0~ubuntu-bionic

[19:41:31 root@k8s-master1 ~]#apt-get -y install docker-ce=5:19.03.9~3-0~ubuntu-bionic docker-ce-cli=5:19.03.9~3-0~ubuntu-bionic

6. Start docker and enable it at boot

[19:44:11 root@k8s-master1 ~]#systemctl start docker && systemctl enable docker

7. Add the hosts entry

[16:49:08 root@k8s-master1 ~]#vim /etc/hosts
10.0.0.105 hub.zhangguiyuan.com

8. Sync the crt certificate to the client

Create the /etc/docker/certs.d/hub.zhangguiyuan.com directory on the master1 host.

[16:50:44 root@k8s-master1 ~]#mkdir /etc/docker/certs.d/hub.zhangguiyuan.com -p

9. Back on the harbor host, copy the harbor-ca.crt public certificate to /etc/docker/certs.d/hub.zhangguiyuan.com on the master host

[17:06:04 root@harbor harbor]#cd certs/
[17:08:14 root@harbor certs]#scp harbor-ca.crt 10.0.0.101:/etc/docker/certs.d/hub.zhangguiyuan.com

10. Restart docker and verify that you can log in to harbor

[16:51:58 root@k8s-master1 ~]#systemctl restart docker

# The test login succeeds
[17:09:16 root@k8s-master1 ~]#docker login hub.zhangguiyuan.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

2.2.2.6 Push Image Test

1. Add a docker registry mirror to speed up image pulls

#Add the docker registry mirror address
[18:02:41 root@k8s-master-1 ~]#vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://hpqoo1ip.mirror.aliyuncs.com"]
}

#Reload the systemd configuration
[18:09:22 root@k8s-master-1 ~]#sudo systemctl daemon-reload 

#Restart the docker service
[18:09:35 root@k8s-master-1 ~]#sudo systemctl restart docker

# Pull the centos image
[17:12:39 root@k8s-master1 ~]#docker pull centos

2. Push it to the local registry

[17:20:08 root@k8s-master1 ~]#docker tag centos:latest hub.zhangguiyuan.com/baseimage/centos:latest

[17:21:27 root@k8s-master1 ~]#docker push hub.zhangguiyuan.com/baseimage/centos

2.2.2.7 Script to Distribute the Certificate and Update the Hosts Entry

Create the /etc/docker/certs.d/hub.zhangguiyuan.com directory on the master2 host and the two node hosts, add the hosts entry, and copy the certificate over.

[17:26:31 root@k8s-master1 ~]#vim scp.sh

#!/bin/bash
HOST="
10.0.0.102
10.0.0.109
10.0.0.110
"

for ip in ${HOST};do
          ssh root@${ip} "mkdir /etc/docker/certs.d/hub.zhangguiyuan.com -p"        
          ssh root@${ip} "echo 10.0.0.105 hub.zhangguiyuan.com >> /etc/hosts"                   scp /etc/docker/certs.d/hub.zhangguiyuan.com/harbor-ca.crt ${ip}:/etc/docker/certs.d/hub.zhangguiyuan.com/harbor-ca.crt      
done
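
A short usage note (my addition): run the script from master1, where the certificate was copied in the previous step. Key-based SSH is only set up later in 2.2.5, so each ssh/scp here will prompt for the root password unless keys have already been distributed.

bash scp.sh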

Test on node1 whether the image can be pulled:

[17:41:27 root@k8s-node1 docker]#docker pull hub.zhangguiyuan.com/baseimage/centos
[17:41:37 root@k8s-node1 docker]#docker images 
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
hub.zhangguiyuan.com/baseimage/centos   latest              5d0da3dc9764        8 days ago          231MB

2.2.3 Installing haproxy and keepalived

Install the load-balancing service and the VIP high-availability service on the two haproxy hosts.

# Install on HA1
[15:19:32 root@k8s-HA1 ~]#apt install keepalived haproxy -y

# Install on HA2
[15:19:32 root@k8s-HA2 ~]#apt install keepalived haproxy -y
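
The original steps only install the packages here. As a sketch of what the configuration for the 10.0.0.188:6443 VIP mentioned earlier could look like (my own example; interface name, router id and priorities are assumptions to adapt to your environment):

# /etc/haproxy/haproxy.cfg (excerpt) - forward the apiserver VIP to both masters
listen k8s_apiserver
        bind 10.0.0.188:6443
        mode tcp
        balance roundrobin
        server 10.0.0.101 10.0.0.101:6443 check inter 3s fall 3 rise 2
        server 10.0.0.102 10.0.0.102:6443 check inter 3s fall 3 rise 2

# /etc/keepalived/keepalived.conf (excerpt) - HA1 holds the VIP, HA2 is the backup
vrrp_instance VI_1 {
    state MASTER            # BACKUP on HA2
    interface eth0          # assumption: adjust to the actual NIC name
    virtual_router_id 51
    priority 100            # use a lower value, e.g. 80, on HA2
    advert_int 1
    virtual_ipaddress {
        10.0.0.188 dev eth0 label eth0:0
    }
}

On the backup node you will also want net.ipv4.ip_nonlocal_bind=1 so that haproxy can bind the VIP address while keepalived has not yet assigned it locally.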

2.2.4 Installing Python 2.7

Python 2.7 needs to be installed on all the K8S cluster nodes; only the two haproxy hosts can be skipped.

1. Install python

[17:44:27 root@k8s-master1 ~]#apt-get install python2.7 -y
[17:44:27 root@k8s-master2 ~]#apt-get install python2.7 -y
[17:44:18 root@harbor ~]#apt-get install python2.7 -y
[17:44:25 root@k8s-node1 ~]#apt-get install python2.7 -y
[17:44:26 root@k8s-node2 ~]#apt-get install python2.7 -y
[17:45:01 root@k8s-etcd1 ~]#apt-get install python2.7 -y
[17:45:01 root@k8s-etcd2 ~]#apt-get install python2.7 -y
[17:45:01 root@k8s-etcd3 ~]#apt-get install python2.7 -y

2. Create a symlink to /usr/bin/python on every machine where python was installed.

[17:44:27 root@k8s-master1 ~]#ln -s /usr/bin/python2.7 /usr/bin/python
[17:44:27 root@k8s-master2 ~]#ln -s /usr/bin/python2.7 /usr/bin/python
[17:44:18 root@harbor ~]#ln -s /usr/bin/python2.7 /usr/bin/python
[17:44:25 root@k8s-node1 ~]#ln -s /usr/bin/python2.7 /usr/bin/python
[17:44:26 root@k8s-node2 ~]#ln -s /usr/bin/python2.7 /usr/bin/python
[17:45:01 root@k8s-etcd1 ~]#ln -s /usr/bin/python2.7 /usr/bin/python
[17:45:01 root@k8s-etcd2 ~]#ln -s /usr/bin/python2.7 /usr/bin/python
[17:45:01 root@k8s-etcd3 ~]#ln -s /usr/bin/python2.7 /usr/bin/python

2.2.5 Installing and Preparing Ansible on the Master Node

The deployment here is driven from master1, so install ansible on the master1 machine.

1. Install ansible

[17:46:00 root@k8s-master1 ~]#apt-get install ansible -y

2. Generate an SSH key for passwordless ansible access

[17:49:18 root@k8s-master1 ~]#ssh-keygen

3. Distribute the public key to the other nodes

[17:49:25 root@k8s-master1 ~]#ssh-copy-id 10.0.0.101
[17:49:25 root@k8s-master1 ~]#ssh-copy-id 10.0.0.102
[17:49:25 root@k8s-master1 ~]#ssh-copy-id 10.0.0.103
[17:49:25 root@k8s-master1 ~]#ssh-copy-id 10.0.0.104
[17:49:25 root@k8s-master1 ~]#ssh-copy-id 10.0.0.105
[17:49:25 root@k8s-master1 ~]#ssh-copy-id 10.0.0.106
[17:49:25 root@k8s-master1 ~]#ssh-copy-id 10.0.0.107
[17:49:25 root@k8s-master1 ~]#ssh-copy-id 10.0.0.108
[17:49:25 root@k8s-master1 ~]#ssh-copy-id 10.0.0.109
[17:49:25 root@k8s-master1 ~]#ssh-copy-id 10.0.0.110
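
If you would rather not type the password ten times, the same thing can be done in a loop with sshpass (my addition; sshpass is not installed by default and the password below is a placeholder):

apt-get install sshpass -y
for ip in $(seq -f "10.0.0.%g" 101 110); do
  sshpass -p 'YOUR_ROOT_PASSWORD' ssh-copy-id -o StrictHostKeyChecking=no root@${ip}
done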

2.2.5.1 Orchestrating the K8S Installation from the Ansible Control Node

  • Download the project source code

  • Download the binaries

  • Download the offline docker images

It is recommended to use the ezdown script to download all the required files; once it completes successfully, everything (kubeasz code, binaries, offline images) is laid out under /etc/kubeasz.

https://github.com/easzlab/kubeasz/releases/

1. Download the ezdown tool

[17:59:09 root@k8s-master1 ~]#wget https://github.com/easzlab/kubeasz/releases/download/3.0.1/ezdown

[18:02:13 root@k8s-master1 ~]#chmod +x ezdown

2. Set the docker version and the K8S version to install

Since the docker we installed earlier is 19.03.9, keep the same version here.

[18:00:44 root@k8s-master1 ~]#vim ezdown 
DOCKER_VER=19.03.9
K8S_BIN_VER=v1.21.0

3. Download the project source, binaries and offline images

[18:02:28 root@k8s-master1 ~]#./ezdown -D
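
After the download finishes, you can spot-check that everything landed where the following steps expect it (a verification step I am adding; the /etc/kubeasz layout matches what is used below):

ls /etc/kubeasz/             # ezctl, playbooks, roles, down, bin ...
ls /etc/kubeasz/bin/ | head  # kube-apiserver, kubectl, etcdctl ...
docker images | head         # the offline images pulled by ezdown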

2.2.5.2 Create a Cluster Configuration Instance

[18:09:17 root@k8s-master1 ~]#cd /etc/kubeasz/
[18:09:31 root@k8s-master1 kubeasz]#./ezctl new k8s-fx01
2021-09-24 18:10:17 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-fx01
2021-09-24 18:10:17 DEBUG set version of common plugins
2021-09-24 18:10:17 DEBUG cluster k8s-fx01: files successfully created.
2021-09-24 18:10:17 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-fx01/hosts'
2021-09-24 18:10:17 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-fx01/config.yml'

2.2.5.3 Modify the Cluster Configuration File

[18:17:49 root@k8s-master1 kubeasz]#vim clusters/k8s-fx01/hosts 

# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
10.0.0.106 NODE_NAME=etcd1
10.0.0.107 NODE_NAME=etcd2
10.0.0.108 NODE_NAME=etcd3

# master node(s)
[kube_master]
10.0.0.101
10.0.0.102

# work node(s)
[kube_node]
10.0.0.109
10.0.0.110

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="172.30.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.20.0.0/16"

bin_dir="/usr/local/bin"

[18:24:51 root@k8s-master1 kubeasz]#vim playbooks/01.prepare.yml


2.2.6 Deployment Stages

2.2.6.1 Step 01: Create Certificates and Prepare the Environment

This step mainly accomplishes:

  • (optional) role:os-harden, optional system hardening to meet the Linux security baseline, see upstream

  • (optional) role:chrony, optional time synchronization between cluster nodes

  • role:deploy, creates the CA certificate and the various kubeconfigs that cluster components need to access the apiserver

  • role:prepare, basic system configuration, distribution of the CA certificate, kubectl client installation

[18:30:18 root@k8s-master1 kubeasz]#./ezctl setup k8s-fx01 01

2.2.6.2 Step 02: Install the etcd Cluster

[18:30:51 root@k8s-master1 kubeasz]#./ezctl setup k8s-fx01 02

Go to an etcd node and check whether the cluster is healthy after the installation:

[17:46:20 root@k8s-etcd1 ~]#for i in `seq 107 108`;do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://10.0.0.${i}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health;done

# Proposals committed successfully
https://10.0.0.107:2379 is healthy: successfully committed proposal: took = 9.108698ms
https://10.0.0.108:2379 is healthy: successfully committed proposal: took = 9.89495ms
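
You can also list the members from any etcd node to confirm that all three joined the cluster (an extra check I am adding; same certificate paths as above):

ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --endpoints=https://10.0.0.106:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  member list -w table
# expect three started members: etcd1, etcd2, etcd3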

2.2.6.3 Step 03: Install the Container Runtime (docker)

[18:33:43 root@k8s-master1 kubeasz]#./ezctl setup k8s-fx01 03

2.2.6.4 Step 04: Install the kube_master Nodes

[18:33:56 root@k8s-master1 kubeasz]#./ezctl setup k8s-fx01 04

2.2.6.5 Step 05: Install the kube_node Nodes

kube_node nodes are the cluster nodes that run the actual workloads; the kube_master nodes must be deployed first. The following components are installed on each node:

  • kubelet: the main component on a kube_node

  • kube-proxy: publishes application services and does load balancing

  • haproxy: forwards requests to the multiple apiservers, see the HA-2x architecture

  • calico: configures the container network (or another network plugin)

1. Edit the configuration file to set the kube-proxy proxy mode to ipvs

[18:33:56 root@k8s-master1 kubeasz]#vim /etc/kubeasz/roles/kube-node/templates/kube-proxy-config.yaml.j2
...
mode: "{{ PROXY_MODE }}"
ipvs:
  scheduler: rr

2. Deploy the node nodes

[18:45:41 root@k8s-master1 kubeasz]#./ezctl setup k8s-fx01 05

3. Check the node status

[18:48:11 root@k8s-master1 kubeasz]#kubectl get node
NAME         STATUS   ROLES   AGE   VERSION
10.0.0.101   Ready    node    13m   v1.21.0
10.0.0.102   Ready    node    13m   v1.21.0
10.0.0.109   Ready    node    28s   v1.21.0
10.0.0.110   Ready    node    28s   v1.21.0
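
To confirm that kube-proxy really came up in ipvs mode, you can query its metrics endpoint on one of the nodes, or list the virtual servers (my own check; ipvsadm may need to be installed first):

curl 127.0.0.1:10249/proxyMode               # should print: ipvs
apt-get install ipvsadm -y && ipvsadm -Ln    # lists the ipvs virtual servers and their backends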

At this point, if you look at the /etc/haproxy/haproxy.cfg file on a node, you will see that the load-balancing rules have been generated for us automatically:

[18:49:55 root@k8s-node2 ~]#vim /etc/haproxy/haproxy.cfg

global
        log /dev/log    local1 warning
        chroot /var/lib/haproxy
        user haproxy
        group haproxy
        daemon
        nbproc 1

defaults
        log     global
        timeout connect 5s
        timeout client  10m
        timeout server  10m

listen kube_master
        bind 127.0.0.1:6443
        mode tcp
        option tcplog
        option dontlognull
        option dontlog-normal
        balance roundrobin
        server 10.0.0.101 10.0.0.101:6443 check inter 10s fall 2 rise 2 weight 1
        server 10.0.0.102 10.0.0.102:6443 check inter 10s fall 2 rise 2 weight 1

2.2.6.6 Step 06: Install the Network Component

The calico version used here is 3.15.3, so we need to download calico 3.15.3 from GitHub.

[19:02:30 root@k8s-master1 kubeasz]#cat clusters/k8s-fx01/config.yml | grep calico_ver:
calico_ver: "v3.15.3"

https://github.com/projectcalico/calico/releases/tag/v3.15.3

[19:06:22 root@k8s-master1 kubeasz]#cd /opt/
[19:06:32 root@k8s-master1 opt]#wget https://github.com/projectcalico/calico/releases/download/v3.15.3/release-v3.15.3.tgz
[19:13:16 root@k8s-master1 opt]#tar xf release-v3.15.3.tgz

After extraction, the images/ directory contains four image tarballs:

  • calico-node.tar: the calico agent container that runs on every node

  • calico-cni.tar: the CNI plugin image used to wire containers into the network

  • calico-pod2daemon-flexvol.tar: the flexvolume driver image (tagged and pushed below as well)

  • calico-kube-controllers.tar: the kube-controllers image that runs in the cluster and handles calico's control logic

[19:17:00 root@k8s-master1 opt]#cd release-v3.15.3/images/

# Load the images into docker
[19:17:05 root@k8s-master1 images]#docker load -i calico-cni.tar
Loaded image: calico/cni:v3.15.3
[19:17:19 root@k8s-master1 images]#docker load -i calico-node.tar
Loaded image: calico/node:v3.15.3
[19:17:40 root@k8s-master1 images]#docker load -i calico-kube-controllers.tar

2. Push the images that were just loaded into docker to harbor

[19:20:22 root@k8s-master1 images]#docker tag calico/node:v3.15.3 hub.zhangguiyuan.com/baseimage/calico/node:v3.15.3
[19:21:13 root@k8s-master1 images]#docker tag calico/pod2daemon-flexvol:v3.15.3 hub.zhangguiyuan.com/baseimage/calico/pod2daemon-flexvol:v3.15.3
[19:22:03 root@k8s-master1 images]#docker tag calico/cni:v3.15.3 hub.zhangguiyuan.com/baseimage/calico/cni:v3.15.3
[19:22:16 root@k8s-master1 images]#docker tag calico/kube-controllers:v3.15.3 hub.zhangguiyuan.com/baseimage/calico/kube-controllers:v3.15.3

# Push to harbor
[19:23:08 root@k8s-master1 images]#docker push hub.zhangguiyuan.com/baseimage/calico/node:v3.15.3 
[19:23:43 root@k8s-master1 images]#docker push hub.zhangguiyuan.com/baseimage/calico/pod2daemon-flexvol:v3.15.3
[19:24:00 root@k8s-master1 images]#docker push hub.zhangguiyuan.com/baseimage/calico/cni:v3.15.3 
[19:24:21 root@k8s-master1 images]#docker push hub.zhangguiyuan.com/baseimage/calico/kube-controllers:v3.15.3

3. Edit the configuration file and replace the image addresses with the local (intranet) harbor addresses; after the change:

[19:25:16 root@k8s-master1 images]#cd /etc/kubeasz/
[19:27:23 root@k8s-master1 kubeasz]#grep image roles/calico/templates/calico-v3.15.yaml.j2 -n
212:          image: hub.zhangguiyuan.com/baseimage/calico/cni:v3.15.3
251:          image: hub.zhangguiyuan.com/baseimage/calico/pod2daemon-flexvol:v3.15.3
262:          image: hub.zhangguiyuan.com/baseimage/calico/node:v3.15.3
488:          image: hub.zhangguiyuan.com/baseimage/calico/kube-controllers:v3.15.3

4. Install the network plugin

[19:27:29 root@k8s-master1 kubeasz]#./ezctl setup k8s-fx01 06

5. Verify after the installation is complete

[11:33:33 root@k8s-master1 ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.0.0.110   | node-to-node mesh | up    | 03:28:18 | Established |
| 10.0.0.102   | node-to-node mesh | up    | 03:28:19 | Established |
| 10.0.0.109   | node-to-node mesh | up    | 03:28:22 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
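
To verify the pod network end to end, you can start two test pods and ping across nodes (a quick check of my own; the centos image is the one pushed to harbor earlier, and if ping is missing inside the image, install iputils or use another image):

kubectl run net-test1 --image=hub.zhangguiyuan.com/baseimage/centos:latest -- sleep 360000
kubectl run net-test2 --image=hub.zhangguiyuan.com/baseimage/centos:latest -- sleep 360000
kubectl get pod -o wide        # note the pod IPs and which nodes they landed on
kubectl exec -it net-test1 -- ping -c 2 <pod IP of net-test2>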

2.2.6.7 Installing CoreDNS

1. Manually pull the coredns image

[11:36:12 root@k8s-master1 ~]#docker pull coredns/coredns:1.8.3

2. Tag the official image and push it to the local harbor registry

[11:38:44 root@k8s-master1 ~]#docker tag coredns/coredns:1.8.3 hub.zhangguiyuan.com/baseimage/coredns/coredns:1.8.3

[11:39:03 root@k8s-master1 ~]#docker push hub.zhangguiyuan.com/baseimage/coredns/coredns:1.8.3

3. Prepare the coredns.yml file

[11:39:47 root@k8s-master1 ~]# wget https://dl.k8s.io/v1.21.4/kubernetes.tar.gz

[11:40:23 root@k8s-master1 ~]#tar xf kubernetes.tar.gz

[11:40:38 root@k8s-master1 ~]#cd kubernetes/cluster/addons/dns/coredns

[11:40:44 root@k8s-master1 coredns]#cp coredns.yaml.base coredns.yaml

4. Modify the following settings in the coredns.yml file:

[12:50:39 root@k8s-master1 coredns]#vim coredns.yaml
#coredns yaml file
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        bind 0.0.0.0
        ready
        #DNS_DOMAIN is the CLUSTER_DNS_DOMAIN value from the /etc/kubeasz/clusters/k8s-fx01/hosts config
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        #/etc/resolv.conf can be changed to your company's or another DNS server address
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        #image: coredns/coredns:1.8.3
        #pull the image file from the harbor registry
        image: hub.zhangguiyuan.com/baseimage/coredns/coredns:1.8.3
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  type: NodePort
  selector:
    k8s-app: kube-dns
  clusterIP: 172.30.0.2  # the second IP address of the SERVICE_CIDR range
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
    nodePort: 30009

5. Verify that coredns has been deployed successfully

[12:52:51 root@k8s-master1 coredns]#kubectl get pod -A | grep coredns
kube-system   coredns-654d4fbb5b-ctzfn                   1/1     Running   0          2m46s
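
A quick resolution test from inside the cluster (my addition; busybox:1.28 is pulled from Docker Hub here, so swap in a local image if the nodes cannot reach it):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local
# the reply should come from 172.30.0.2, the kube-dns clusterIP configured above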