Project repository: https://github.com/As9530272755/k8s-Of-Redis
Run a Redis service in Kubernetes, persisting its data on a PV/PVC. This is typically a standalone setup: one project maps to one Redis instance.
4.1 Build the CentOS base image
1. Write the Dockerfile
[11:49:55 root@k8s-master centos]#vim Dockerfile
# Custom CentOS base image
FROM centos:7.8.2003
MAINTAINER zhanggy
ADD filebeat-7.6.2-x86_64.rpm /tmp
RUN yum install -y /tmp/filebeat-7.6.2-x86_64.rpm vim wget tree lrzsz gcc gcc-c++ \
      automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute \
      net-tools iotop && \
    rm -rf /etc/localtime /tmp/filebeat-7.6.2-x86_64.rpm && \
    ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
    useradd www -u 2020 && useradd nginx -u 2021
2. Download filebeat
[11:39:51 root@k8s-master centos]#wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.2-x86_64.rpm
3. Write the build script
[11:52:36 root@k8s-master centos]#vim build-command.sh
docker build -t hub.zhangguiyuan.com/baseimage/centos-base:7.8.2003 .
docker push hub.zhangguiyuan.com/baseimage/centos-base:7.8.2003
4. Run the script
[11:54:03 root@k8s-master centos]#bash build-command.sh
4.2 Build the Redis image
1. Download redis-4.0.14.tar.gz
[11:57:21 root@k8s-master redis]# wget http://download.redis.io/releases/redis-4.0.14.tar.gz
2. Write the startup script
[14:16:54 root@k8s-master redis]#vim run_redis.sh
#!/bin/bash
# Start redis-server with the baked-in config. This assumes redis.conf sets
# daemonize yes, so the server forks to the background; tail then keeps PID 1
# in the foreground so the container does not exit.
/usr/sbin/redis-server /usr/local/redis/redis.conf
tail -f /etc/hosts
3. Write the Dockerfile
[11:57:21 root@k8s-master redis]#vim Dockerfile
#Redis Image
FROM hub.zhangguiyuan.com/baseimage/centos-base:7.8.2003
MAINTAINER zgy
ADD redis-4.0.14.tar.gz /usr/local/src
RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis && \
    cd /usr/local/redis && make && \
    cp src/redis-cli /usr/sbin/ && \
    cp src/redis-server /usr/sbin/ && \
    mkdir -pv /data/redis-data
# Copy in the Redis config file, which defines where Redis persists its data
ADD redis.conf /usr/local/redis/redis.conf
ADD run_redis.sh /usr/local/redis/run_redis.sh
EXPOSE 6379
CMD ["/usr/local/redis/run_redis.sh"]
4. Write the build script
[14:18:40 root@k8s-master redis]#vim build-command.sh
TAG=$1
docker build -t hub.zhangguiyuan.com/baseimage/redis:${TAG} .
sleep 3
docker push hub.zhangguiyuan.com/baseimage/redis:${TAG}
5. Make all the scripts executable
[14:19:14 root@k8s-master redis]#chmod +x *.sh
6. Run the build script
[14:23:20 root@k8s-master redis]#bash build-command.sh v4.0.14
7. Verify the image runs
[14:23:45 root@k8s-master redis]#docker run -it --rm hub.zhangguiyuan.com/baseimage/redis:v4.0.14
6:C 20 Oct 14:24:01.470 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
6:C 20 Oct 14:24:01.470 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=6, just started
6:C 20 Oct 14:24:01.470 # Configuration loaded
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 8a3f7f7775d3
4.3 Write the Redis YAML manifests
The key question when running Redis in Kubernetes is where its persistence files (RDB snapshots and/or the AOF file) end up. To enable either persistence mode, adjust redis.conf accordingly before building the image.
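For reference, the persistence-related part of redis.conf might look like the following sketch; the exact values are assumptions, since the config file itself is not shown here:

```
# Write RDB/AOF files under the path mounted by the Deployment
dir /data/redis-data
# Password expected by the AUTH 123456 step later in this walkthrough
requirepass 123456
# Fork to the background so run_redis.sh can keep PID 1 alive with tail
daemonize yes
# Uncomment to enable append-only persistence as well
# appendonly yes
```

Whatever directory `dir` points at must match the `mountPath` used in the Deployment below, or the data will land on the container's writable layer instead of the PV.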
4.3.1 Create the PV and PVC
1. Write the namespace manifest
[14:32:36 root@k8s-master K8sYaml]#vim redis-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: redis
2. Write the PV manifest
[14:30:26 root@k8s-master pv]#vim redis-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
  namespace: redis   # note: PVs are cluster-scoped, so this field has no effect
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:   # NFS backend
    path: /data/k8sdata/redis/redis-datadir-1
    server: 10.0.0.133
3. Write the PVC manifest
[14:33:33 root@k8s-master K8sYaml]#vim redis-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1
  namespace: redis
spec:
  volumeName: redis-datadir-pv-1   # bind explicitly to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
4. Create the directory on the NFS server
[14:35:19 root@harbor-nfs ~]#mkdir -p /data/k8sdata/redis/redis-datadir-1
5. Apply the manifests
[14:40:56 root@k8s-master K8sYaml]#kubectl apply -f redis-ns.yaml
[14:40:42 root@k8s-master K8sYaml]#kubectl apply -f redis-persistentvolume.yaml
[14:41:04 root@k8s-master K8sYaml]#kubectl apply -f redis-persistentvolumeclaim.yaml
6. Verify the PVC is bound
[14:41:06 root@k8s-master K8sYaml]#kubectl get pvc -n redis
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
redis-datadir-pvc-1 Bound redis-datadir-pv-1 10Gi RWO 29s
4.3.2 Deploy Redis
1. Write the Redis YAML manifest
[14:49:43 root@k8s-master K8sYaml]#cat redis.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis
  name: deploy-devops-redis
  namespace: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          # image from our local registry
          image: hub.zhangguiyuan.com/baseimage/redis:v4.0.14
          imagePullPolicy: IfNotPresent
          # mount point inside the container
          volumeMounts:
            # must match the RDB/AOF directory set in redis.conf
            # when the image was built
            - mountPath: "/data/redis-data/"
              name: redis-datadir
      # volume backed by the PVC
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-datadir-pvc-1
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
  namespace: redis
spec:
  type: NodePort
  ports:
    - name: http
      port: 6379
      targetPort: 6379
      # note: 26379 is outside the default NodePort range (30000-32767);
      # the apiserver's --service-node-port-range must cover it
      nodePort: 26379
  selector:
    app: devops-redis
  # Source-IP session affinity: requests from the same client IP are routed
  # to the same Pod. Typically used when no session sharing is in place;
  # comparable to source-IP hashing on a load balancer.
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
2. Apply it
[15:05:53 root@k8s-master K8sYaml]#kubectl apply -f redis.yaml
3. Creation succeeded
[15:06:02 root@k8s-master K8sYaml]#kubectl get pod -n redis
NAME READY STATUS RESTARTS AGE
deploy-devops-redis-5f7d956fc9-lvwzs 1/1 Running 0 3m17s
4.3.3 Verify inside the Pod
Exec into the Pod to verify that its data is actually written to the backing storage, and check that a client outside the Kubernetes cluster can connect.
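From a machine outside the cluster, the NodePort service can be reached with redis-cli; the node address below is a placeholder for any Kubernetes node IP:

```
# connect through the NodePort (26379), not the container port 6379
redis-cli -h <node-ip> -p 26379
# then, inside the redis-cli session:
AUTH 123456
GET zgy
```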
1. The port is listening
[15:08:47 root@k8s-master K8sYaml]#kubectl exec -it -n redis deploy-devops-redis-5f7d956fc9-lvwzs /bin/bash
# port 6379 is listening
[root@deploy-devops-redis-5f7d956fc9-lvwzs /]# ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:6379 *:*
2. Write data inside the Redis Pod
[root@deploy-devops-redis-5f7d956fc9-lvwzs /]# redis-cli
127.0.0.1:6379> AUTH 123456
OK
# write key zgy with value 1314
127.0.0.1:6379> SET zgy 1314
OK
# the data was written
127.0.0.1:6379> GET zgy
"1314"
Furthermore, redis.conf is configured so that Redis takes an RDB snapshot whenever a key changes within 5 seconds. Where the snapshot ends up depends on what backs the mounted /data/redis-data path, which here is the NFS export.
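The snapshot rule described above corresponds to a save directive along these lines (an assumption; the actual redis.conf is not shown):

```
# Take an RDB snapshot if at least 1 key changed within 5 seconds;
# dump.rdb is written into the configured data dir on the NFS-backed mount
save 5 1
```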
3. Verify on the NFS node
[15:28:21 root@harbor-nfs ~]#ls /data/k8sdata/redis/redis-datadir-1/
dump.rdb
4.3.4 Verify from a Windows client
Here I use Redis Desktop Manager to demonstrate writing and reading data.
We can see that the existing data can be read.
Next, write data to Redis from the client; here the text "test windows" is written.
Verify in the Pod:
# the write succeeded
127.0.0.1:6379> get zgy
"1314\ntest windows"