Full project repository: https://github.com/As9530272755/k8s-nginx-tomcat
Build custom nginx and tomcat images, then run each of them on K8S.
In production, layered image builds are recommended: build a base image first, then build the other business images on top of that base image (a sketch of the resulting FROM chain follows this list).
- First, download the official OS image.
- Build the base image: install the common base packages we need on top of the OS image.
- Build the service images: once the base image is ready, branch images for individual services can be built from it, but this layer still contains no application code or config files written by the developers.
- Build the business images: as shown in the figure, businesses AAA and BBB; different businesses and projects can all be built on top of our nginx or other service images.
- Later business changes are always rebuilt from this final business image.
- Finally, the code written by the developers is baked into the business image, the image is pushed to harbor, and K8S pulls it from harbor to start the pods.
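The FROM chain built in the rest of this article follows exactly this layering; shown together as a sketch (these are the images from the sections below):

# base image: OS plus common packages
FROM centos:7.8.2003                                        # -> baseimage/centos-base:7.8.2003
# service image: nginx compiled on top of the base image
FROM hub.zhangguiyuan.com/baseimage/centos-base:7.8.2003    # -> baseimage/nginx-base:v1.18.0
# business image: project code and config on top of the service image
FROM hub.zhangguiyuan.com/baseimage/nginx-base:v1.18.0      # -> v1/nginx-web1:v1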
2.1 Architecture overview
SVC forwarding flow
We run an nginx and a tomcat in K8S. The nginx svc is exposed externally via NodePort; a request to nginx-svc lands on the nginx pod. The nginx-svc can also be fronted by our LB layer. nginx itself only serves static content.
Dynamic requests are handed off to another service; here I use tomcat as the example. There is a tomcat-service, and to reach tomcat there must be a URL (a location configured in nginx) that routes to it.
The flow is: the request first goes to the tomcat-svc, and the tomcat-svc then forwards it to a tomcat pod. There may of course be multiple tomcat pods to avoid a single point of failure.
Microservice forwarding flow
What I do most at work these days is service discovery through a registry, because going through an svc has a bottleneck: every request has to be forwarded by the svc. With a registry the svc is not needed at all; services are registered in zookeeper, other services fetch the backend addresses from zookeeper, and from then on the services call each other directly. This does require the developers to implement it in code.
So a registry is started first; when a pod comes up it registers itself with the registry, and when pods need to call each other they fetch the backend container addresses from the registry. When a user request has to be forwarded to the backend, the nginx pod forwards it directly to the tomcat pod, bypassing the svc entirely.
Pods talk to each other directly by address instead of going through the svc, which gives the best performance, but again it has to be implemented by the developers.
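As a rough sketch of the register/look-up pattern (illustrated with the ZooKeeper CLI only; the paths and the zookeeper address here are made up for illustration, and in practice the registration and lookup happen inside the application via a ZooKeeper client library):

# a tomcat pod registers itself as an ephemeral node when it starts
zkCli.sh -server zookeeper.web.svc.cluster.local:2181 create -e /services/tomcat/10.200.247.21:8080 ""
# a caller lists the registered instances and then calls one of them directly, without going through the svc
zkCli.sh -server zookeeper.web.svc.cluster.local:2181 ls /services/tomcat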
2.2 Build the base image
If we want to install a filebeat version other than filebeat-7.6.2-x86_64.rpm, just download it from the official site:
https://www.elastic.co/cn/downloads/beats/filebeat
1. Download filebeat-7.6.2-x86_64.rpm
[11:19:41 root@master centos]#wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.2-x86_64.rpm
2. Write the Dockerfile
[11:06:31 root@master centos]#vim Dockerfile
#Custom CentOS base image
FROM centos:7.8.2003
MAINTAINER zhang.g.y
# used later for log collection; copy it to the /tmp directory
ADD filebeat-7.6.2-x86_64.rpm /tmp
# install dependencies
RUN yum install -y /tmp/filebeat-7.6.2-x86_64.rpm vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop && rm -rf /etc/localtime /tmp/filebeat-7.6.2-x86_64.rpm && ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && useradd www -u 2020 && useradd nginx -u 2021
3. Write the build script
[11:24:21 root@master centos]#vim build-command.sh
# build the image
docker build -t hub.zhangguiyuan.com/baseimage/centos-base:7.8.2003 .
# push the image
docker push hub.zhangguiyuan.com/baseimage/centos-base:7.8.2003
4. Run the build script
[11:21:26 root@master centos]#bash build-command.sh
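An optional quick sanity check after the build: confirm the image exists locally and that the filebeat rpm really got installed.

docker images | grep centos-base
docker run -it --rm hub.zhangguiyuan.com/baseimage/centos-base:7.8.2003 rpm -q filebeat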
2.3 nginx steps
2.3.1 Build the nginx base image
This image is built on top of the centos base image we just created.
1. Write the Dockerfile
[14:36:38 root@master nginx-base]#vim Dockerfile
#Nginx Base Image
FROM hub.zhangguiyuan.com/baseimage/centos-base:7.8.2003
MAINTAINER zhangguiyuan
# install dependencies
RUN yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop
# add the nginx source package
ADD nginx-1.18.0.tar.gz /usr/local/src/
# compile and install
RUN cd /usr/local/src/nginx-1.18.0 && ./configure && make && make install && ln -sv /usr/local/nginx/sbin/nginx /usr/sbin/nginx && rm -rf /usr/local/src/nginx-1.18.0.tar.gz
2. Write the build script
[14:41:06 root@master nginx-base]#vim build-command.sh
docker build -t hub.zhangguiyuan.com/baseimage/nginx-base:v1.18.0 .
sleep 1
docker push hub.zhangguiyuan.com/baseimage/nginx-base:v1.18.0
3. Run it
[14:41:18 root@master nginx-base]#. build-command.sh
4. Verify that nginx was installed successfully
[14:44:00 root@master nginx-base]#docker run -it -p 8081:80 hub.zhangguiyuan.com/baseimage/nginx-base:v1.18.0
# run nginx
[root@38a2560174bc /]# nginx
[root@38a2560174bc /]# ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:80 *:*
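From another terminal on the host, the same thing can be checked with curl (the container was started with -p 8081:80, so the default nginx page should come back):

curl -I http://127.0.0.1:8081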
5. Access from the browser
With that, the nginx-1.18.0 base image is ready.
2.3.2 Build the business image
With the nginx base image in place, we can now build the business image.
1. Create a project named v1 in harbor
2. Write the Dockerfile
[14:55:41 root@master nginx]#vim Dockerfile
#Nginx 1.18.0
FROM hub.zhangguiyuan.com/baseimage/nginx-base:v1.18.0
# replace the config file
ADD nginx.conf /usr/local/nginx/conf/nginx.conf
# test URL used later
ADD app1.tar.gz /usr/local/nginx/html/webapp/
# nginx home page
ADD index.html /usr/local/nginx/html/index.html
# mount paths for static resources
RUN mkdir -p /usr/local/nginx/html/webapp/static /usr/local/nginx/html/webapp/images
EXPOSE 80 443
CMD ["nginx"]
3. Write the image build script
[14:56:56 root@master nginx]#cat build-command.sh
TAG=$1
docker build -t hub.zhangguiyuan.com/v1/nginx-web1:${TAG} .
echo "镜像构建完成,即将上传到harbor"
sleep 1
docker push hub.zhangguiyuan.com/v1/nginx-web1:${TAG}
echo "镜像上传到harbor完成"
# ${TAG} :构建时传递 tag
4.编写 index.html 文件
[14:56:58 root@master nginx]#cat index.html
nginx web1 test v1
# write the index file under the webapp directory
[15:06:39 root@master nginx]#cat webapp/index.html
<html lang="en">
<head><meta charset="UTF-8"><title>Devops</title></head>
<body>zgy devops v11111111</body>
</html>
5. Write the nginx config file
user nginx nginx;
worker_processes auto;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
daemon off;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
# upstream for proxying to tomcat (kept commented out for now)
#upstream tomcat_webserver {
# this is an in-cluster DNS name: <service>.<namespace>.svc.cluster.local
# server web-tomcat-app1-service.web.svc.cluster.local:80;
#}
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root html;
index index.html index.htm;
}
location /webapp {
root html;
index index.html index.htm;
}
# uncomment once the tomcat pod is up; this location forwards requests to tomcat
# location /myapp {
# proxy_pass http://tomcat_webserver;
# proxy_set_header Host $host;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Real-IP $remote_addr;
# }
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
#server {
# listen 443 ssl;
# server_name localhost;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 5m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# location / {
# root html;
# index index.html index.htm;
# }
#}
}
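The config can be syntax-checked before it is baked into the image, for example by mounting it into the nginx base image built earlier (an optional check; image and paths are the ones used in this article):

docker run --rm -v $(pwd)/nginx.conf:/usr/local/nginx/conf/nginx.conf hub.zhangguiyuan.com/baseimage/nginx-base:v1.18.0 nginx -t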
6. Create the app1.tar.gz archive
Later this archive can contain our front-end business code or a front-end directory.
[15:13:10 root@master nginx]#mkdir tmp
[15:13:17 root@master nginx]#cd tmp/
[15:13:40 root@master tmp]#vim index.html
lang="en">
charset="UTF-8">
Devops
zgy devops v11111111
# compress it
[15:14:00 root@master tmp]#tar zcvf app1.tar.gz index.html
index.html
# copy it to the parent directory
[15:16:43 root@master tmp]#cp app1.tar.gz ../
[15:17:05 root@master tmp]#cd ..
# remove the temporary directory
[15:17:12 root@master nginx]#rm -fr tmp/
7. Current directory layout
[15:17:31 root@master nginx]#tree
.
├── app1.tar.gz # later this archive can be a directory or the front-end code
├── build-command.sh
├── Dockerfile
├── index.html
├── nginx.conf
└── webapp
└── index.html
8. Run the build script
[15:17:44 root@master nginx]#bash build-command.sh v1
9. Verify the image built correctly by entering the container
[15:38:06 root@master nginx]#docker run -it --rm hub.zhangguiyuan.com/v1/nginx-web1:v1 /bin/bash
# verify the config file
[root@7c4f50d51b17 /]# cat /usr/local/nginx/conf/nginx.conf
user nginx nginx;
worker_processes auto;
...omitted...
# the tomcat upstream proxy
#upstream tomcat_webserver {
# server web-tomcat-app1-service.zgy.svc.cluster.local:80;
#}
...omitted...
If everything looks good, we can run it on K8S.
2.3.3 Run nginx on K8S
1. Write the yaml file
[15:48:56 root@master k8syaml]#vim nginx.yaml
# create the namespace
apiVersion: v1
kind: Namespace
metadata:
  name: web
# create the Deployment
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: web-nginx-deployment-label
  name: web-nginx-deployment
  namespace: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-nginx-selector
  template:
    metadata:
      labels:
        app: web-nginx-selector
    spec:
      containers:
      - name: web-nginx-container
        image: hub.zhangguiyuan.com/v1/nginx-web1:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "20"
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - name: web-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: web-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: web-images
        nfs:
          server: 10.0.0.103
          path: /data/k8sdata/web/images
      - name: web-static
        nfs:
          server: 10.0.0.103
          path: /data/k8sdata/web/static
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: web-nginx-service-label
  name: web-nginx-service
  namespace: web
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 20002
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 20443
  selector:
    app: web-nginx-selector
2. Create the shared directories on the NFS server
[15:47:48 root@harbor_nfs harbor]#mkdir -p /data/k8sdata/web/images
[15:51:26 root@harbor_nfs harbor]#mkdir -p /data/k8sdata/web/static
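If these paths are not exported yet, the NFS server also needs an export entry covering them; a minimal example (the allowed network and options are illustrative, adjust to your environment):

# /etc/exports on the NFS server
/data/k8sdata *(rw,no_root_squash)
# re-export without restarting the service
exportfs -r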
3. Create the nginx pod
[15:54:00 root@master k8syaml]#kubectl apply -f nginx.yaml
4. Confirm the nginx pod was created successfully
[15:55:13 root@master k8syaml]#kubectl get pod -n web -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-nginx-deployment-5d69489cd5-sljfl 1/1 Running 0 84s 10.200.219.69 master
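The Service can be checked the same way; it should show the two NodePorts (20002 and 20443) defined in the yaml:

kubectl get svc -n web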
2.3.3.1 Verification
Access / (the default page)
Access the webapp URL
With that, our web service is running on K8S.
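The same two checks can be done from the command line against any node's NodePort (replace <node-ip> with one of your node addresses):

curl http://<node-ip>:20002/
curl http://<node-ip>:20002/webapp/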
2.3.3.2 Simulating an upgrade
For a later code upgrade we only need to rebuild the image. For example, here I change index.html to the following content.
1. Modify the code for the upgrade
# v2 version
[16:05:55 root@master nginx]#vim index.html
nginx web1 test v2222
2. Rebuild the image
[16:07:12 root@master nginx]#bash build-command.sh v2
3. Update the image tag in the yaml file
[16:08:36 root@master nginx_tomcat]#vim k8syaml/nginx.yaml
image: hub.zhangguiyuan.com/v1/nginx-web1:v2 # changed to v2 here
4. Re-apply
[16:09:26 root@master nginx_tomcat]#kubectl apply -f k8syaml/nginx.yaml
5. The pod is up
[16:10:33 root@master nginx_tomcat]#kubectl get pod -n web -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-nginx-deployment-684c9b6c46-wm9h5 1/1 Running 0 39s 10.200.247.20 node-2
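The rollout itself can also be watched from the command line, and rolled back if the new version misbehaves (standard kubectl Deployment commands):

kubectl -n web rollout status deployment/web-nginx-deployment
kubectl -n web rollout history deployment/web-nginx-deployment
# roll back to the previous revision if needed
kubectl -n web rollout undo deployment/web-nginx-deployment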
6. Browser access confirms the change took effect
With nginx up and running, the next step is to build tomcat.
2.4 tomcat steps
2.4.1 Build the JDK image
Before building tomcat we need a JDK image.
1. Write the Dockerfile
[16:31:22 root@master jdk-1.8.212]#vim Dockerfile
#JDK base image, still built on the centos base image from earlier
FROM hub.zhangguiyuan.com/baseimage/centos-base:7.8.2003
MAINTAINER zhangguiyuan
ADD jdk-8u212-linux-x64.tar.gz /usr/local/src/
RUN ln -sv /usr/local/src/jdk1.8.0_212 /usr/local/jdk
ADD profile /etc/profile
# configure the environment variables
ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
ENV PATH $PATH:$JAVA_HOME/bin
2. Write the build script
[16:34:13 root@master jdk-1.8.212]#vim build-command.sh
docker build -t hub.zhangguiyuan.com/baseimage/jdk-base:v8.212 .
sleep 1
docker push hub.zhangguiyuan.com/baseimage/jdk-base:v8.212
3. The profile config file
[16:34:58 root@master jdk-1.8.212]#vim profile
# /etc/profile
# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc
# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.
pathmunge () {
case ":${PATH}:" in
*:"$1":*)
;;
*)
if [ "$2" = "after" ] ; then
PATH=$PATH:$1
else
PATH=$1:$PATH
fi
esac
}
if [ -x /usr/bin/id ]; then
if [ -z "$EUID" ]; then
# ksh workaround
EUID=`/usr/bin/id -u`
UID=`/usr/bin/id -ru`
fi
USER="`/usr/bin/id -un`"
LOGNAME=$USER
MAIL="/var/spool/mail/$USER"
fi
# Path manipulation
if [ "$EUID" = "0" ]; then
pathmunge /usr/sbin
pathmunge /usr/local/sbin
else
pathmunge /usr/local/sbin after
pathmunge /usr/sbin after
fi
HOSTNAME=`/usr/bin/hostname 2>/dev/null`
HISTSIZE=1000
if [ "$HISTCONTROL" = "ignorespace" ] ; then
export HISTCONTROL=ignoreboth
else
export HISTCONTROL=ignoredups
fi
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL
# By default, we want umask to get set. This sets it for login shell
# Current threshold for system reserved uid/gids is 200
# You could check uidgid reservation validity in
# /usr/share/doc/setup-*/uidgid file
if [ $UID -gt 199 ] && [ "`/usr/bin/id -gn`" = "`/usr/bin/id -un`" ]; then
umask 002
else
umask 022
fi
for i in /etc/profile.d/*.sh /etc/profile.d/sh.local ; do
if [ -r "$i" ]; then
if [ "${-#*i}" != "$-" ]; then
. "$i"
else
. "$i" >/dev/null
fi
fi
done
unset i
unset -f pathmunge
export LANG=en_US.UTF-8
export HISTTIMEFORMAT="%F %T `whoami` "
export JAVA_HOME=/usr/local/jdk
export TOMCAT_HOME=/apps/tomcat
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$TOMCAT_HOME/bin:$PATH
export CLASSPATH=.$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar
4. Run the image build script
[16:37:08 root@master jdk-1.8.212]#bash build-command.sh
5. Verify the image
[16:55:26 root@master jdk-1.8.212]#docker run -it --rm hub.zhangguiyuan.com/baseimage/jdk-base:v8.212 /bin/bash
[root@b20e83d131d7 /]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)
2.4.2 Build the tomcat base image
1. Download the tomcat package
[16:58:42 root@master tomcat-base]#wget http://mirror.bit.edu.cn/apache/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz
2. Write the Dockerfile
[16:58:42 root@master tomcat-base]#vim Dockerfile
#Tomcat 8.5.43 base image
FROM hub.zhangguiyuan.com/baseimage/jdk-base:v8.212
MAINTAINER zhangguiyuan
# /apps holds the tomcat runtime
# /data/tomcat/webapps holds the business code
RUN mkdir /apps /data/tomcat/webapps /data/tomcat/logs -pv
# tomcat package
ADD apache-tomcat-8.5.43.tar.gz /apps
# create the tomcat user; tomcat itself will later be started as the nginx user,
# otherwise we may hit 403 permission-denied errors later:
# the backend tomcat has to run as the same user the frontend nginx runs as
RUN useradd tomcat -u 2022 && ln -sv /apps/apache-tomcat-8.5.43 /apps/tomcat && chown -R tomcat.tomcat /apps /data -R
3. Write the build script
[17:06:26 root@master tomcat-base]#vim build-command.sh
docker build -t hub.zhangguiyuan.com/baseimage/tomcat-base:v8.5.43 .
sleep 3
docker push hub.zhangguiyuan.com/baseimage/tomcat-base:v8.5.43
4. Run the script
[17:06:55 root@master tomcat-base]#. build-command.sh
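An optional quick check that the directory layout in the image is what we expect:

docker run -it --rm hub.zhangguiyuan.com/baseimage/tomcat-base:v8.5.43 ls -l /apps/tomcat/bin /data/tomcat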
2.4.3 Build the business image
1. Write the server.xml file; it provides the routing configuration
[17:17:08 root@master tomcat-app1]#cat server.xml
<Server port="8005" shutdown="SHUTDOWN">
<Listener className="org.apache.catalina.startup.VersionLoggerListener" />
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
<GlobalNamingResources>
<Resource name="UserDatabase" auth="Container"
type="org.apache.catalina.UserDatabase"
description="User database that can be updated and saved"
factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
pathname="conf/tomcat-users.xml" />
</GlobalNamingResources>
<Service name="Catalina">
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
<Engine name="Catalina" defaultHost="localhost">
<Realm className="org.apache.catalina.realm.LockOutRealm">
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
</Realm>
<Host name="localhost" appBase="/data/tomcat/webapps" unpackWARs="false" autoDeploy="false">
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log" suffix=".txt"
pattern="%h %l %u %t &quot;%r&quot; %s %b" />
</Host>
</Engine>
</Service>
</Server>
2. Edit the Dockerfile
[17:21:07 root@master tomcat-app1]#cat Dockerfile
#tomcat web1
FROM hub.zhangguiyuan.com/baseimage/tomcat-base:v8.5.43
# add the tomcat catalina.sh startup script
ADD catalina.sh /apps/tomcat/bin/catalina.sh
# server.xml with the URL/routing configuration
ADD server.xml /apps/tomcat/conf/server.xml
#ADD myapp/* /data/tomcat/webapps/myapp/
# business code path; it matches the location configured in nginx:
# requests to /myapp/ get forwarded to tomcat
ADD app1.tar.gz /data/tomcat/webapps/myapp/
# script that starts tomcat
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
#ADD filebeat.yml /etc/filebeat/filebeat.yml
RUN chown -R nginx.nginx /data/ /apps/
#ADD filebeat-7.5.1-x86_64.rpm /tmp/
#RUN cd /tmp && yum localinstall -y filebeat-7.5.1-amd64.deb
EXPOSE 8080 8443
# start tomcat
CMD ["/apps/tomcat/bin/run_tomcat.sh"]
3. Write the tomcat startup script
[17:20:39 root@master tomcat-app1]#vim run_tomcat.sh
#echo "nameserver 223.6.6.6" > /etc/resolv.conf
#echo "192.168.7.248 k8s-vip.example.com" >> /etc/hosts
# filebeat 我先不启动,在日志收集的时候才需要
#/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
# 通过 nginx 用户启动 tomcat
su - nginx -c "/apps/tomcat/bin/catalina.sh start"
# 挂后台运行容器
tail -f /etc/hosts
4. Write the build script
[17:25:09 root@master tomcat-app1]#vim build-command.sh
TAG=$1
docker build -t hub.zhangguiyuan.com/v1/tomcat-app1:${TAG} .
sleep 3
docker push hub.zhangguiyuan.com/v1/tomcat-app1:${TAG}
5. Give all the .sh scripts in the current directory execute permission
[17:30:47 root@master tomcat-app1]#chmod +x *.sh
6. Create the business code
[18:00:29 root@master tmp]#cat dir1/index.html
dir1
[18:00:36 root@master tmp]#cat dir2/index.html
dir2
[18:00:39 root@master tmp]#cat index1.html
index1
[18:00:44 root@master tmp]#cat index.html
lang="en">
charset="UTF-8">
ZGY 门户网站
v111111111111
zgy test v222222222222
zgy test v333333333333
zgy test v444444444444
zgy test v555555555555
zgy test v666666666666
zgy test v777777777777
zgy test v888888888888
zgy test v999999999999
zgy test v100000000000
zgy test v111111111111
zgy test v222222222222
[18:00:47 root@master tmp]#tar zcvf app1.tar.gz ./*
./dir1/
./dir1/index.html
./dir2/
./dir2/index.html
./index1.html
./index.html
[18:01:14 root@master tmp]#cp app1.tar.gz ../
[18:01:37 root@master tomcat-app1]#rm -fr tmp/
7. Run the build script
[17:26:28 root@master tomcat-app1]#. build-command.sh v1
8. Verify that the tomcat image works
[17:54:09 root@master tomcat-app1]#docker run -it --rm -p 7070:8080 hub.zhangguiyuan.com/v1/tomcat-app1:v1
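With the container running, the test page can also be fetched from the host in another terminal (the container maps port 7070 to tomcat's 8080):

curl http://127.0.0.1:7070/myapp/index.html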
9. Access from the browser
2.4.4 Run tomcat on K8S
1. Create the K8S yaml directory
[18:06:04 root@master nginx_tomcat]#cd tomcat-dockerfile/
[18:06:16 root@master tomcat-dockerfile]#mkdir k8syaml
2. Write the yaml file
[18:14:09 root@master k8syaml]#vim tomcat-app1.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: web-tomcat-app1-deployment-label
  name: web-tomcat-app1-deployment
  namespace: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: web-tomcat-app1-selector
    spec:
      containers:
      - name: web-tomcat-app1-container
        image: hub.zhangguiyuan.com/v1/tomcat-app1:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: web-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: web-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: web-images
        nfs:
          server: 10.0.0.103
          path: /data/k8sdata/web/images
      - name: web-static
        nfs:
          server: 10.0.0.103
          path: /data/k8sdata/web/static
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: web-tomcat-app1-service-label
  name: web-tomcat-app1-service
  namespace: web
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 28080    # tomcat does not need to be exposed externally; this NodePort is only for testing once the pod is up
  selector:
    app: web-tomcat-app1-selector
3. Apply it
[18:15:27 root@master k8syaml]#kubectl apply -f tomcat-app1.yaml
deployment.apps/web-tomcat-app1-deployment created
service/web-tomcat-app1-service created
[18:15:34 root@master k8syaml]#kubectl get pod -n web
NAME READY STATUS RESTARTS AGE
web-nginx-deployment-684c9b6c46-wm9h5 1/1 Running 0 125m
web-tomcat-app1-deployment-7f94ff6d64-sphdp 1/1 Running 0 3s
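Since the Service also exposes NodePort 28080 for testing, tomcat can be hit directly before wiring it in behind nginx (replace <node-ip> with one of your node addresses):

curl http://<node-ip>:28080/myapp/index.html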
2.4.4.1 Verification
Once tomcat is up we need to verify from inside the nginx pod that tomcat is reachable.
# enter the nginx pod
[18:15:58 root@master k8syaml]#kubectl exec -it -n web web-nginx-deployment-684c9b6c46-wm9h5 /bin/bash
# the tomcat svc DNS name resolves and answers ping
[root@web-nginx-deployment-684c9b6c46-wm9h5 /]# ping web-tomcat-app1-service.web.svc.cluster.local
PING web-tomcat-app1-service.web.svc.cluster.local (10.111.179.51) 56(84) bytes of data.
64 bytes from web-tomcat-app1-service.web.svc.cluster.local (10.111.179.51): icmp_seq=1 ttl=64 time=0.081 ms
# access the /myapp/index.html URL
[root@web-nginx-deployment-684c9b6c46-wm9h5 /]# curl web-tomcat-app1-service/myapp/index.html
lang="en">
charset="UTF-8">
ZGY 门户网站
v111111111111
zgy test v222222222222
zgy test v333333333333
zgy test v444444444444
zgy test v555555555555
zgy test v666666666666
zgy test v777777777777
zgy test v888888888888
zgy test v999999999999
zgy test v100000000000
zgy test v111111111111
zgy test v222222222222
As long as this is reachable, the DNS resolution from nginx works.
2.5 Modify the nginx image for static/dynamic separation
Now that nginx can reach tomcat, we still need to update the nginx config: when the nginx business image was built earlier, the tomcat upstream and the location that proxies to it were commented out in nginx.conf, so we uncomment them now.
1. Edit the nginx config
[18:26:41 root@master k8syaml]#vim /root/nginx_tomcat/nginx-dockerfile/nginx/nginx.conf
upstream tomcat_webserver {
# this is an in-cluster DNS name: <service>.<namespace>.svc.cluster.local
server web-tomcat-app1-service.web.svc.cluster.local:80;
}
# any request to the /myapp URL is forwarded to the http://tomcat_webserver upstream
location /myapp {
proxy_pass http://tomcat_webserver;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
2. Rebuild the image
[18:38:15 root@master nginx]#pwd
/root/nginx_tomcat/nginx-dockerfile/nginx
# here I tag the new image v3
[18:38:16 root@master nginx]#. build-command.sh v3
3. Edit the nginx.yaml file and change the image tag to v3
[18:40:19 root@master k8syaml]#vim nginx.yaml
image: hub.zhangguiyuan.com/v1/nginx-web1:v3
# re-create the pod
[18:40:34 root@master k8syaml]#kubectl apply -f nginx.yaml
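Once the v3 pod is running, the dynamic path can be tested straight through nginx's NodePort (replace <node-ip> with one of your node addresses):

curl http://<node-ip>:20002/myapp/index.html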
2.6 Modify the HAProxy config for proxying
1. Modify the config
[18:31:20 root@master k8syaml]#vim /etc/haproxy/haproxy.cfg
listen k8s-nginx-configmap
bind 10.0.0.100:80
mode tcp
server nginx 10.0.0.100:20002 check inter 3s fall 3 rise 5
2. Restart haproxy
[18:35:19 root@master k8syaml]#systemctl restart haproxy.service
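Before (or after) restarting, the configuration can be validated with haproxy's built-in check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg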
2.7 Access verification
1. Browser access succeeds
2. The tomcat page proxied by nginx is now reachable as well
Another option is an ingress, but an ingress is more work to configure; if there is no special requirement, nginx is enough.
2.7.1 Upload an image through the tomcat pod and access it
1. Enter the tomcat pod
[18:56:46 root@master ~]#kubectl exec -it -n web web-tomcat-app1-deployment-7f94ff6d64-gd72p /bin/bash
# cd into the images directory
[root@web-tomcat-app1-deployment-7f94ff6d64-gd72p /]# cd /usr/local/nginx/html/webapp/images/
# download an image
[root@web-tomcat-app1-deployment-7f94ff6d64-gd72p images]# wget http://39.105.137.222:8089/wp-content/uploads/2021/10/image-20211014152432868.png
# rename the image
[root@web-tomcat-app1-deployment-7f94ff6d64-gd72p images]# mv image-20211014152432868.png 1.jpg
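Because this images directory is the NFS volume mounted into both the tomcat and nginx pods, the file is immediately served by nginx and can also be checked with curl through the HAProxy address configured above:

curl -I http://10.0.0.100/webapp/images/1.jpg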
2. Verify access from the browser