K8S in Practice Series: ZooKeeper Cluster

1 K8S Hands-On Case: a ZooKeeper Cluster

Personal project repo: https://github.com/As9530272755/Zookeeper-cluster-of-k8s

Build a ZooKeeper cluster using PV and PVC as the backend storage.

Here I install ZooKeeper version 3.4.14.

As we all know, ZooKeeper is mainly used in companies to maintain Kafka clusters, because Kafka by itself has no way to achieve cluster high availability; services like this generally depend on ZooKeeper. Microservice registration and discovery also commonly rely on ZooKeeper.

ZooKeeper needs a Java environment, however, so we have to build our own image first.

ZooKeeper official site: https://zookeeper.apache.org/

ZooKeeper requires JDK 1.8 or a later version to run.


Download link: https://archive.apache.org/dist/zookeeper/


1.1 ZooKeeper Architecture and Principles

A single ZooKeeper instance would be simple and involve no cluster configuration. A ZooKeeper cluster, as we know, needs the three server.N lines in its configuration. In K8S we address each member through a Service (svc), because a Service address does not change; once the ZooKeeper cluster is running in K8S, its structure looks like the diagram below.

zk1, zk2 and zk3 need to talk to each other for leader election and data synchronization. How do they communicate? Through their Services: zk1-svc has a label selector bound only to zk1, and likewise zk2-svc and zk3-svc are bound to their own pods. Each zk pod reaches the other members by looking up their Services, and the Service then forwards the traffic to the corresponding container.

When configuring this, each server id must be unique within the cluster, because it is used in ZooKeeper's role (leader) election.

The data, of course, is stored outside K8S. ZooKeeper holds business data such as microservice registration addresses, which must not live inside a container: if the container were rebuilt, that data would be lost. So there has to be commercial or distributed storage outside the cluster to persist the ZooKeeper data. We could simply create three directories on a NAS and mount them into zk1, zk2 and zk3, but here I use PVs and PVCs instead. Each zk pod gets its own PVC mounted read-write, so we need three PVCs and three PVs; each PVC writes to its PV, and the PV persists the data to the external storage. The benefit is that even if a zk pod dies, the data is not lost. If the leader zk1 goes down, zk2 and zk3 hold an election: they first compare transaction IDs, and whoever has the newer transaction ID becomes the new leader. When the zk1 pod is recreated, it syncs the full data set from the new leader.

ZooKeeper's leader election proceeds in the following steps:

  1. When the cluster is started for the first time, ZooKeeper first compares the transaction IDs of the servers; at this point there is no data yet, so the transaction IDs are all the same.

  2. Since the transaction ID cannot decide the election, it falls back to the server ID: the server with the larger ID becomes the leader.

This ultimately produces the following three lines of configuration:

server.1=zk1-service:2888:3888
server.2=zk2-service:2888:3888
server.3=zk3-service:2888:3888

# 2888: communication between cluster members (the leader listens on this port)
# 3888: listened on during leader election

1.2 Installing Harbor

Because we are simulating a local, private environment here, we need to deploy a Harbor registry.

1.2.1 Installing Docker

1. Install dependencies

[19:39:41 root@harbor ~]#sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common

2. Install the GPG key

[19:39:54 root@harbor ~]#curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

3. Add the docker-ce apt repository

[19:40:20 root@harbor ~]#sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

4. Update the package index

[19:40:59 root@harbor ~]#apt-get -y update

5. Install Docker version 5:19.03.9~3-0~ubuntu-bionic

[19:41:31 root@harbor ~]#apt-get -y install docker-ce=5:19.03.9~3-0~ubuntu-bionic docker-ce-cli=5:19.03.9~3-0~ubuntu-bionic

6. Start Docker and enable it at boot

[19:44:11 root@harbor ~]#systemctl start docker && systemctl enable docker

1.2.2 Deploying Harbor

We also distribute the Docker certificate so that each of our nodes can pull images from Harbor.

Install docker-compose

[15:52:21 root@harbor ~]#wget https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64

[15:54:34 root@harbor ~]#mv docker-compose-Linux-x86_64 /usr/local/bin/docker-compose

[15:54:38 root@harbor ~]#chmod a+x /usr/local/bin/docker-compose

[15:54:41 root@harbor ~]#docker-compose --version
docker-compose version 1.29.2, build 5becea4c

1. Download and extract Harbor

# Run this on the harbor node

[15:22:41 root@harbor ~]#cd /usr/local/src/

[15:28:11 root@harbor src]#wget https://github.com/goharbor/harbor/releases/download/v2.1.6/harbor-offline-installer-v2.1.6.tgz

[15:29:16 root@harbor src]#tar xf harbor-offline-installer-v2.1.6.tgz 

2. Issue a certificate

[15:29:27 root@harbor src]#cd harbor/

# Create a certs directory to hold the certificate
[17:02:46 root@harbor harbor]#mkdir certs/

# Generate the private key
[17:02:51 root@harbor harbor]#openssl genrsa -out /usr/local/src/harbor/certs/harbor-ca.key

# Issue a self-signed certificate
[17:02:55 root@harbor harbor]#openssl req -x509 -new -nodes -key /usr/local/src/harbor/certs/harbor-ca.key -subj "/CN=hub.zhangguiyuan.com" -days 7120 -out /usr/local/src/harbor/certs/harbor-ca.crt
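Before wiring the certificate into Harbor, it does no harm to sanity-check what was just signed. A quick look with openssl (using the paths created above) should show the expected CN and validity window:

# Optional check: inspect the self-signed certificate
openssl x509 -in /usr/local/src/harbor/certs/harbor-ca.crt -noout -subject -dates
# Expect subject CN = hub.zhangguiyuan.com and a validity window of roughly 7120 days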

3. Edit the configuration

[16:05:16 root@harbor harbor]#vim harbor.yml.tmpl
  hostname: hub.zhangguiyuan.com
  certificate: /usr/local/src/harbor/certs/harbor-ca.crt
  private_key: /usr/local/src/harbor/certs/harbor-ca.key
  harbor_admin_password: 123456

# Copy it to harbor.yml
[16:15:29 root@harbor harbor]#cp harbor.yml.tmpl harbor.yml

4. Run the Harbor installer

[16:14:00 root@harbor harbor]#./install.sh

Check that Harbor has come up.
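One quick way to confirm this (a minimal check, assuming you are still in /usr/local/src/harbor where install.sh generated docker-compose.yml) is to list the Harbor containers:

# Run from /usr/local/src/harbor
docker-compose ps
# All harbor containers should be in the Up / healthy state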

1.2.3 Access Verification

Verify access in a browser: https://10.0.0.103/

Username: admin

Password: 123456

Log in to the admin page and create a project named baseimage.

1.2.4 Testing the Harbor Login from the Master

1. Configure the hosts entry

[15:45:48 root@master ]#vim /etc/hosts
10.0.0.103 hub.zhangguiyuan.com

2. Sync the .crt certificate to the client

Create the /etc/docker/certs.d/hub.zhangguiyuan.com directory on the master node

[15:47:17 root@master ~]#mkdir /etc/docker/certs.d/hub.zhangguiyuan.com -p

3. Back on the harbor host, copy the harbor-ca.crt public key file to /etc/docker/certs.d/hub.zhangguiyuan.com on the master host

[17:06:04 root@harbor harbor]#cd certs/
[15:47:49 root@harbor certs]#scp harbor-ca.crt 10.0.0.100:/etc/docker/certs.d/hub.zhangguiyuan.com

4. Restart Docker and verify that we can log in to Harbor

[15:47:43 root@master ~]#systemctl restart docker

# Login test succeeds
[15:49:01 root@master ~]#docker login hub.zhangguiyuan.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

5. Verify that an image can be pushed to the local registry

[15:52:08 root@master ~]#docker pull centos
[15:53:14 root@master ~]#docker tag centos:latest hub.zhangguiyuan.com/baseimage/centos:latest

# Push succeeds
[15:53:21 root@master ~]#docker push hub.zhangguiyuan.com/baseimage/centos

1.2.5 Distributing the Certificate and Updating the Hosts File

Create the /etc/docker/certs.d/hub.zhangguiyuan.com directory on the other two node hosts so that they can also log in to the image registry.

1. Write the script

[15:59:36 root@master ~]#vim scp.sh 

#!/bin/bash
HOST="
10.0.0.101
10.0.0.102
"

for ip in ${HOST};do
        ssh root@${ip} "mkdir /etc/docker/certs.d/hub.zhangguiyuan.com -p"
        ssh root@${ip} "echo 10.0.0.103 hub.zhangguiyuan.com >> /etc/hosts"
        scp /etc/docker/certs.d/hub.zhangguiyuan.com/harbor-ca.crt ${ip}:/etc/docker/certs.d/hub.zhangguiyuan.com/harbor-ca.crt
done

2. Run the script

[15:51:55 root@master ~]#bash scp.sh 
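As a quick sanity check (a sketch that relies on the same passwordless ssh the script uses), confirm that the certificate and hosts entry actually landed on each node:

for ip in 10.0.0.101 10.0.0.102; do
  ssh root@${ip} "ls /etc/docker/certs.d/hub.zhangguiyuan.com/ && grep hub.zhangguiyuan.com /etc/hosts"
done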

3. Pulling the centos image on a node now succeeds

[15:59:40 root@node-1 ~]#docker pull hub.zhangguiyuan.com/baseimage/centos

1.3 Downloading the JDK Image

# Pull the JDK image
[15:00:04 root@master daemonset]#docker pull elevy/slim_java:8

# Tag it
[16:01:07 root@master ~]#docker tag elevy/slim_java:8 hub.zhangguiyuan.com/baseimage/elevy/slim_java:8

# Push the image to the private registry
[16:01:29 root@master ~]#docker push hub.zhangguiyuan.com/baseimage/elevy/slim_java:8

1.4 Building the Image with a Dockerfile

With the JDK image ready, we now need to prepare our business image.

1. Current directory layout

[18:43:53 root@master zookeeper]#tree 
.
├── bin
│   └── zkReady.sh
├── build-command.sh
├── conf
│   ├── log4j.properties
│   └── zoo.cfg
├── Dockerfile
├── entrypoint.sh
├── k8s
│   ├── pv
│   │   ├── zookeeperNS.yaml
│   │   ├── zookeeper-persistentvolumeclaim.yaml
│   │   └── zookeeper-persistentvolume.yaml
│   └── zookeeper.yaml
├── KEYS
├── repositories
├── zookeeper-3.12-Dockerfile.tar.gz
├── zookeeper-3.4.14.tar.gz
└── zookeeper-3.4.14.tar.gz.as

# bin/                  ZooKeeper startup/readiness script (zkReady.sh)
# build-command.sh      image build script
# conf/                 ZooKeeper configuration files
# Dockerfile            the Dockerfile
# entrypoint.sh         script that dynamically appends the server id lines
# KEYS                  KEYS file for signature verification
# repositories          apk repository (mirror) file
# k8s/                  holds the PV, PVC and zookeeper.yaml manifests

2. Write entrypoint.sh

[17:54:10 root@master zookeeper]#vim entrypoint.sh 

#!/bin/bash
  
echo ${MYID:-1} > /zookeeper/data/myid

if [ -n "$SERVERS" ]; then
        IFS=\, read -a servers <<<"$SERVERS"
        for i in "${!servers[@]}"; do
                printf "\nserver.%i=%s:2888:3888" "$((1 + $i))" "${servers[$i]}" >> /zookeeper/conf/zoo.cfg
        done
fi

cd /zookeeper
exec "$@"

# This script writes my server id to /zookeeper/data/myid,
# then appends output to /zookeeper/conf/zoo.cfg, dynamically adding the few server.N configuration lines
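To see exactly what gets appended, the same loop can be dry-run locally (a standalone sketch, not tied to the image; SERVERS matches what the Deployments will pass in later):

#!/bin/bash
# Hypothetical dry run of the entrypoint loop
SERVERS="zookeeper1,zookeeper2,zookeeper3"
IFS=\, read -a servers <<<"$SERVERS"
for i in "${!servers[@]}"; do
        printf "\nserver.%i=%s:2888:3888" "$((1 + $i))" "${servers[$i]}"
done
# Prints:
# server.1=zookeeper1:2888:3888
# server.2=zookeeper2:2888:3888
# server.3=zookeeper3:2888:3888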

3. Write the ZooKeeper configuration file

[19:17:45 root@master zookeeper]#cat conf/zoo.cfg 
# Heartbeat interval in milliseconds (default 2000, i.e. one heartbeat every two seconds)
tickTime=2000
# Maximum number of ticks the followers (F) may take to initially connect to the leader (L)
initLimit=10
# Maximum number of ticks a request/response between a follower (F) and the leader (L) may take
syncLimit=5
# Stores myid, snapshots, logs and the server's unique ID; this directory is referenced later when creating the zookeeper Deployment
dataDir=/zookeeper/data
# Transaction log directory
dataLogDir=/zookeeper/wal
#snapCount=100000
# ZooKeeper can automatically purge old transaction logs and snapshots; this is the purge interval in hours (1 or greater; the default 0 disables auto purge)
autopurge.purgeInterval=1
# Port that clients connect to; ZooKeeper listens here for client requests (default 2181)
clientPort=2181
# When true, ZooKeeper listens for connections on all available IP addresses; this also affects the ZAB protocol and fast leader election (default false)
quorumListenOnAllIPs=true

4. Write the Dockerfile

[17:20:45 root@master zookeeper]#vim Dockerfile 

#FROM harbor-linux38.local.com/linux38/slim_java:8
# Base image that we pushed to the private registry earlier
FROM hub.zhangguiyuan.com/baseimage/elevy/slim_java:8

ENV ZK_VERSION 3.4.14
# Replace the apk repositories
ADD repositories /etc/apk/repositories
# Download Zookeeper: copy the tarball, its signature and the KEYS file
COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz
COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc
COPY KEYS /tmp/KEYS
# Install build dependencies
RUN apk add --no-cache --virtual .build-deps \
      ca-certificates   \
      gnupg             \
      tar               \
      wget &&           \
    #
    # Install dependencies
    apk add --no-cache  \
      bash &&           \
    #
    #
    # Verify the signature
    export GNUPGHOME="$(mktemp -d)" && \
    gpg -q --batch --import /tmp/KEYS && \
    gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
    #
    # Set up directories
    #
    mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
    #
    # Install
    tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
    #
    # Slim down
    cd /zookeeper && \
    cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
    rm -rf \
      *.txt \
      *.xml \
      bin/README.txt \
      bin/*.cmd \
      conf/* \
      contrib \
      dist-maven \
      docs \
      lib/*.txt \
      lib/cobertura \
      lib/jdiff \
      recipes \
      src \
      zookeeper-*.asc \
      zookeeper-*.md5 \
      zookeeper-*.sha1 && \
    #
    # Clean up
    apk del .build-deps && \
    rm -rf /tmp/* "$GNUPGHOME"

# Copy the configuration files, the readiness script and entrypoint.sh
COPY conf /zookeeper/conf/
COPY bin/zkReady.sh /zookeeper/bin/
COPY entrypoint.sh /

ENV PATH=/zookeeper/bin:${PATH} \
    ZOO_LOG_DIR=/zookeeper/log \
    ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
    JMXPORT=9010

# Run the entrypoint script
ENTRYPOINT [ "/entrypoint.sh" ]

# Start ZooKeeper in the foreground
CMD [ "zkServer.sh", "start-foreground" ]

EXPOSE 2181 2888 3888 9010

5. Write the image build script

[18:02:59 root@master zookeeper]#vim build-command.sh 

#!/bin/bash
TAG=$1
docker build -t hub.zhangguiyuan.com/baseimage/zookeeper:${TAG} .
sleep 1
docker push  hub.zhangguiyuan.com/baseimage/zookeeper:${TAG}

# TAG is the variable parameter passed in when running the script

6. Add execute permission

[18:09:33 root@master zookeeper]#chmod +x *.sh

7. Run the image build script

[18:09:33 root@master zookeeper]#bash build-command.sh testzk11-20211014_1314

# The argument after build-command.sh is an arbitrary 8-character tag followed by the current time; this can of course be customized

8. Run the image we just built with Docker

[18:10:53 root@master zookeeper]#docker run -it --rm hub.zhangguiyuan.com/baseimage/zookeeper:testzk11-20211014_1314
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
2021-10-14 10:10:56,626 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /zookeeper/bin/../conf/zoo.cfg
2021-10-14 10:10:56,631 [myid:] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2021-10-14 10:10:56,631 [myid:] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
2021-10-14 10:10:56,632 [myid:] - WARN  [main:QuorumPeerMain@116] - Either no config or no quorum defined in config, running  in standalone mode
2021-10-14 10:10:56,634 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /zookeeper/bin/../conf/zoo.cfg
2021-10-14 10:10:56,635 [myid:] - INFO  [main:ZooKeeperServerMain@98] - Starting server
2021-10-14 10:10:56,639 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2021-10-14 10:10:56,647 [myid:] - INFO  [main:Environment@100] - Server environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2021-10-14 10:10:56,649 [myid:] - INFO  [main:Environment@100] - Server environment:host.name=43669ea04884
2021-10-14 10:10:56,651 [myid:] - INFO  [main:Environment@100] - Server environment:java.version=1.8.0_144
2021-10-14 10:10:56,651 [myid:] - INFO  [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
2021-10-14 10:10:56,651 [myid:] - INFO  [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-8-oracle
2021-10-14 10:10:56,654 [myid:] - INFO  [main:Environment@100] - Server environment:java.class.path=/zookeeper/bin/../zookeeper-server/target/classes:/zookeeper/bin/../build/classes:/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/zookeeper/bin/../build/lib/*.jar:/zookeeper/bin/../lib/slf4j-log4j12-1.7.25.jar:/zookeeper/bin/../lib/slf4j-api-1.7.25.jar:/zookeeper/bin/../lib/netty-3.10.6.Final.jar:/zookeeper/bin/../lib/log4j-1.2.17.jar:/zookeeper/bin/../lib/jline-0.9.94.jar:/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/zookeeper/bin/../zookeeper-3.4.14.jar:/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/zookeeper/bin/../conf:
2021-10-14 10:10:56,654 [myid:] - INFO  [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2021-10-14 10:10:56,655 [myid:] - INFO  [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
2021-10-14 10:10:56,655 [myid:] - INFO  [main:Environment@100] - Server environment:java.compiler=
2021-10-14 10:10:56,657 [myid:] - INFO  [main:Environment@100] - Server environment:os.name=Linux
2021-10-14 10:10:56,657 [myid:] - INFO  [main:Environment@100] - Server environment:os.arch=amd64
2021-10-14 10:10:56,657 [myid:] - INFO  [main:Environment@100] - Server environment:os.version=4.15.0-112-generic
2021-10-14 10:10:56,657 [myid:] - INFO  [main:Environment@100] - Server environment:user.name=root
2021-10-14 10:10:56,657 [myid:] - INFO  [main:Environment@100] - Server environment:user.home=/root
2021-10-14 10:10:56,657 [myid:] - INFO  [main:Environment@100] - Server environment:user.dir=/zookeeper
2021-10-14 10:10:56,659 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2021-10-14 10:10:56,662 [myid:] - INFO  [main:ZooKeeperServer@836] - tickTime set to 2000
2021-10-14 10:10:56,662 [myid:] - INFO  [main:ZooKeeperServer@845] - minSessionTimeout set to -1
2021-10-14 10:10:56,663 [myid:] - INFO  [main:ZooKeeperServer@854] - maxSessionTimeout set to -1
2021-10-14 10:10:56,670 [myid:] - INFO  [main:ServerCnxnFactory@117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
2021-10-14 10:10:56,675 [myid:] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181

We can see that the image runs, so now we can start it through K8S.
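As an optional smoke test before moving to K8S, you can run the image detached and query it with the bundled zkCli.sh (a sketch; it assumes the tag above exists locally and host port 2181 is free):

docker run -d --name zk-smoke -p 2181:2181 hub.zhangguiyuan.com/baseimage/zookeeper:testzk11-20211014_1314
docker exec -it zk-smoke /zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 ls /
# Expect the root znode list, e.g. [zookeeper]
docker rm -f zk-smoke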

1.5 Creating the PV and PVC

In K8S we need PVs and PVCs to provide storage for ZooKeeper.

1.5.1 Preparing NFS

1. Create three storage directories on the NFS server

[18:21:57 root@harbor_nfs ~]#mkdir -p /data/k8sdata/zookeeper/zookeeper-datadir-1
[18:22:00 root@harbor_nfs ~]#mkdir -p /data/k8sdata/zookeeper/zookeeper-datadir-2
[18:22:01 root@harbor_nfs ~]#mkdir -p /data/k8sdata/zookeeper/zookeeper-datadir-3

2. Install NFS

[18:23:26 root@harbor_nfs ~]#apt-get install nfs-kernel-server -y

3. Configure the NFS export rules

[18:25:22 root@harbor_nfs ~]#vim /etc/exports 
/data/k8sdata *(rw,no_root_squash,no_all_squash,sync)

4. Restart NFS and enable it at boot

[18:28:02 root@harbor_nfs ~]#systemctl restart nfs-server.service 
[18:28:11 root@harbor_nfs ~]#systemctl enable nfs-server.service
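Before touching any client, exportfs on the NFS server itself should already list the directory (a quick optional check):

# On the NFS server
exportfs -v
# Expect /data/k8sdata exported with rw,no_root_squash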

5. Install the NFS client on all K8S nodes

# Install the NFS client
sudo apt install nfs-common -y

# Enable at boot
systemctl enable --now nfs-common

6. Verify that the NFS client can see the NFS server's exports

[18:29:53 root@master pv]#showmount -e 10.0.0.103
Export list for 10.0.0.103:
/data/k8sdata *
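If you want to go one step further than showmount, a throwaway manual mount from a K8S node also proves read-write access (a sketch, assuming /mnt is free on that node):

mount -t nfs 10.0.0.103:/data/k8sdata /mnt
touch /mnt/zookeeper/nfs-write-test && rm /mnt/zookeeper/nfs-write-test
umount /mnt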

1.5.2 Creating the PVs

We need three PVs, because there are three zk servers.

1. Write the PV manifest

[18:32:26 root@master pv]#pwd
/root/zookeeper/k8s/pv
[18:14:22 root@master pv]#vim zookeeper-persistentvolume.yaml 

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce                             # mounted read-write by a single node
  nfs:                                          # backend storage type: nfs
    server: 10.0.0.103                          # NFS server address
    path: /data/k8sdata/zookeeper/zookeeper-datadir-1

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.0.0.103
    path: /data/k8sdata/zookeeper/zookeeper-datadir-2

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-3
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.0.0.103
    path: /data/k8sdata/zookeeper/zookeeper-datadir-3

2. Create the PVs

[18:32:55 root@master pv]#kubectl apply -f zookeeper-persistentvolume.yaml 
persistentvolume/zookeeper-datadir-pv-1 created
persistentvolume/zookeeper-datadir-pv-2 created
persistentvolume/zookeeper-datadir-pv-3 created

# The PVs have been created; their status is Available
[18:32:59 root@master pv]#kubectl get pv
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
zookeeper-datadir-pv-1   20Gi       RWO            Retain           Available                                   25s
zookeeper-datadir-pv-2   20Gi       RWO            Retain           Available                                   25s
zookeeper-datadir-pv-3   20Gi       RWO            Retain           Available                                   25s
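kubectl describe shows the NFS backing of each PV if you want to confirm the server and path before binding (optional check):

kubectl describe pv zookeeper-datadir-pv-1 | grep -A 3 Source
# Expect Type: NFS, Server: 10.0.0.103, Path: /data/k8sdata/zookeeper/zookeeper-datadir-1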

1.5.3 Creating the PVCs

As we know, a PVC must be created inside a namespace, so here I create a zookeeper namespace first.

1. Write and create the namespace

# Write the manifest
[18:37:10 root@master pv]#vim zookeeperNS.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: zookeeper
 
# Create it
[18:37:08 root@master pv]#kubectl apply -f zookeeperNS.yaml 

2. Write the PVC manifest

[18:33:52 root@master pv]#vim zookeeper-persistentvolumeclaim.yaml 

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1             # PVC name
  namespace: zookeeper                      # namespace it belongs to
spec:
  accessModes:                              # access modes
    - ReadWriteOnce                         # mounted read-write by a single node
  volumeName: zookeeper-datadir-pv-1        # name of the PV to bind
  resources:
    requests:                               # requested resources
      storage: 10Gi                         # request 10Gi of storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-2
  namespace: zookeeper
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-2
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-3
  namespace: zookeeper
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-3
  resources:
    requests:
      storage: 10Gi

# Create the PVCs
[18:40:19 root@master pv]#kubectl apply -f zookeeper-persistentvolumeclaim.yaml 
persistentvolumeclaim/zookeeper-datadir-pvc-1 created
persistentvolumeclaim/zookeeper-datadir-pvc-2 created
persistentvolumeclaim/zookeeper-datadir-pvc-3 created

# The PVC status is Bound; note the STORAGECLASS column is still empty, since we bind directly to named PVs instead of using a StorageClass
[18:40:57 root@master pv]#kubectl get pvc -n zookeeper 
NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zookeeper-datadir-pvc-1   Bound    zookeeper-datadir-pv-1   20Gi       RWO                           25s
zookeeper-datadir-pvc-2   Bound    zookeeper-datadir-pv-2   20Gi       RWO                           25s
zookeeper-datadir-pvc-3   Bound    zookeeper-datadir-pv-3   20Gi       RWO                           25s
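Because each claim binds to a whole named PV, the reported capacity is the PV's 20Gi even though only 10Gi was requested; describe confirms the pairing (optional check):

kubectl describe pvc zookeeper-datadir-pvc-1 -n zookeeper | grep -E "Volume:|Capacity|Access Modes"
# Volume should be zookeeper-datadir-pv-1, Capacity 20Gi, Access Modes RWO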

1.5.4 Creating the ZooKeeper Deployments

1. Working directory

[18:48:11 root@master k8s]#pwd
/root/zookeeper/k8s

2. Write the zookeeper.yaml manifest

# This svc is used by clients inside the K8S cluster
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: zookeeper
spec:
  ports:
    - name: client
      port: 2181
  selector:
    app: zookeeper
---
# This svc is exposed via NodePort so that services outside the K8S cluster can access it
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
  namespace: zookeeper
spec:
  type: NodePort                    # svc type is NodePort
  ports:                            # port configuration
    - name: client
      port: 2181                    # port inside the pod
      nodePort: 32181               # port exposed on each node
    - name: followers               # used for ZooKeeper intra-cluster communication (the leader listens on this port)
      port: 2888
    - name: election                # ZooKeeper intra-cluster election port 3888
      port: 3888
  selector:                         # svc label selector
    app: zookeeper                  # match pods labelled app=zookeeper
    server-id: "1"                  # match the pod labelled server-id="1", i.e. zk server 1
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper2
  namespace: zookeeper
spec:
  type: NodePort        
  ports:
    - name: client
      port: 2181
      nodePort: 32182
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:                         # svc label selector
    app: zookeeper
    server-id: "2"                  # match the pod labelled server-id="2", i.e. zk server 2
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper3
  namespace: zookeeper
spec:
  type: NodePort        
  ports:
    - name: client
      port: 2181
      nodePort: 32183
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "3"                  # 匹配 server-id= "3" 的 pod 
---
# ZooKeeper Deployment definitions
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper1
  namespace: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper                        # selector matches pods labelled app=zookeeper
  template:
    metadata:
      labels:
        app: zookeeper                      # pod label app=zookeeper, matched by the Deployment selector
        server-id: "1"                      # pod label server-id="1", matched by the zookeeper1 svc
    spec:
      containers:
        - name: server
          image: hub.zhangguiyuan.com/baseimage/zookeeper:testzk11-20211014_1314
          imagePullPolicy: Always
          env:
            - name: MYID
              value: "1"
            - name: SERVERS                     # SERVERS variable consumed by entrypoint.sh to build the server.N lines
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS                    # sets the JVM heap size
              value: "-Xmx2G"
          ports:                                # container ports
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:                         # volume mounts for the pod
          - mountPath: "/zookeeper/data"        # mount path in the container; matches dataDir in the zoo.cfg baked into the image
            name: zookeeper-datadir-pvc-1       # name of the volume to mount
      volumes:                                  # all pod volumes in a single list
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-1         # volume name referenced by the pod
          persistentVolumeClaim:                # backed by a PVC
            claimName: zookeeper-datadir-pvc-1  # name of the PVC to use
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper2
  namespace: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "2"                      # 定义 server-id="2" 标签给 svc 匹配           
    spec:
      containers:
        - name: server
          image: hub.zhangguiyuan.com/baseimage/zookeeper:testzk11-20211014_1314 
          imagePullPolicy: Always
          env:
            - name: MYID
              value: "2"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-2 
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-2
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-2
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper3
  namespace: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "3"
    spec:
      containers:
        - name: server
          image: hub.zhangguiyuan.com/baseimage/zookeeper:testzk11-20211014_1314
          imagePullPolicy: Always
          env:
            - name: MYID
              value: "3"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-3
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-3
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-3

3. Create the ZooKeeper resources

[19:24:38 root@master k8s]#kubectl apply -f zookeeper.yaml 

4. Check that the pods are running

[19:24:49 root@master k8s]#kubectl get pod -n zookeeper 
NAME                          READY   STATUS    RESTARTS   AGE
zookeeper1-76f4cfcf9d-2mrnp   1/1     Running   0          109s
zookeeper2-db4d698d7-vrlsw    1/1     Running   0          109s
zookeeper3-867cc94cc6-mjl8b   1/1     Running   0          109s
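It is also worth confirming that each per-pod Service matched exactly one endpoint, i.e. that the server-id selectors worked as intended (a quick check):

kubectl get svc -n zookeeper
kubectl get endpoints -n zookeeper
# zookeeper1/2/3 should each list a single pod IP on 2181/2888/3888;
# the plain zookeeper svc should list all three pod IPs on 2181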

1.5.5 Verifying the Data and Cluster Status

1. The pods are up; the most important thing now is to verify that the data actually lands on our NFS server.

# Verify on the NFS server that the data has been written

# The bound server ids match
[19:26:52 root@harbor_nfs ~]#cat /data/k8sdata/zookeeper/zookeeper-datadir-3/myid 
3
[19:27:35 root@harbor_nfs ~]#cat /data/k8sdata/zookeeper/zookeeper-datadir-2/myid 
2
[19:27:38 root@harbor_nfs ~]#cat /data/k8sdata/zookeeper/zookeeper-datadir-1/myid 
1

2. Verify that ZooKeeper formed a working cluster after coming up

# Exec into the zookeeper1 pod
[19:25:19 root@master k8s]#kubectl exec -it -n zookeeper zookeeper1-76f4cfcf9d-2mrnp /bin/bash

# Check the current ZooKeeper status
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower          # a Mode of follower (or leader) means the ZooKeeper cluster was deployed successfully
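Instead of exec'ing into each pod one by one, the same status call can be looped over all three pods (a small sketch using kubectl):

for p in $(kubectl get pod -n zookeeper -o name); do
  echo "== ${p}"
  kubectl exec -n zookeeper ${p} -- /zookeeper/bin/zkServer.sh status 2>/dev/null | grep Mode
done
# Expect two followers and one leader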

1.5.6 Verifying Leader Re-election

1. Find the ZooKeeper leader node

# The zookeeper3 pod reports leader status
[19:31:45 root@master k8s]#kubectl exec -it -n zookeeper zookeeper3-867cc94cc6-mjl8b /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: leader                    # zookeeper3 is the leader node

2. Now delete the zookeeper3 pod and watch whether a new leader is elected normally

[19:34:09 root@master k8s]#kubectl delete pod -n zookeeper zookeeper3-867cc94cc6-mjl8b 

3. Check again: the zookeeper3 pod has been recreated

[19:34:32 root@master k8s]#kubectl get pod -n zookeeper 
NAME                          READY   STATUS    RESTARTS   AGE
zookeeper1-76f4cfcf9d-2mrnp   1/1     Running   0          11m
zookeeper2-db4d698d7-vrlsw    1/1     Running   0          11m
zookeeper3-867cc94cc6-tfhmb   1/1     Running   0          19s  # recreated

4. Exec into the new zookeeper3 pod

[19:34:38 root@master k8s]#kubectl exec -it -n zookeeper zookeeper3-867cc94cc6-tfhmb /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.3#  /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower          # the role is now follower, so a new leader has already been elected

5. The leader role has been taken over by zookeeper2

[19:35:24 root@master k8s]#kubectl exec -it -n zookeeper zookeeper2-db4d698d7-vrlsw /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.3#  /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: leader                    # zookeeper2 is now the leader

1.5.7 Verifying Access from Outside the K8S Cluster

If everything runs inside K8S we do not need external access at all; the ClusterIP Service is enough. But servers outside the K8S cluster may also need access, which is why we expose ZooKeeper through NodePort.

1. Here I telnet from the NFS server to port 32181 on a K8S node, which confirms access

# Connection succeeded
[19:27:41 root@harbor_nfs ~]#telnet 10.0.0.100 32181
Trying 10.0.0.100...
Connected to 10.0.0.100.
Escape character is '^]'.
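telnet only proves the NodePort is reachable; to get an actual response, send a four-letter-word command such as srvr (a sketch that assumes nc is installed on the NFS server and that srvr is allowed by the server's 4lw whitelist):

echo srvr | nc 10.0.0.100 32181
# Expect output with the ZooKeeper version and a Mode: follower / Mode: leader line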