v1.23.7 CentOS binary install, IPv4/IPv6 dual stack, three masters and two workers


Binary installation of Kubernetes (k8s) v1.23.7 with IPv4/IPv6 dual stack

Open-sourcing this Kubernetes guide takes real effort; please give it a star, thank you 🌹

Introduction

Binary installation of Kubernetes.

Documentation for new versions will be updated as soon as possible after release.

Documentation and installation packages have been generated for 1.23.3, 1.23.4, 1.23.5, 1.23.6, 1.23.7, 1.24.0, and 1.24.1.

I use IPv6 so the cluster can be reached over the public Internet, which is why static IPv6 addresses are configured.

If you have no IPv6 environment, or do not want to use IPv6, simply skip configuring IPv6 addresses on the hosts.

Skipping IPv6 does not affect the later steps; the cluster still supports IPv6, leaving room for expansion later.

Project: https://github.com/cby-chen/Kubernetes

Each initial version is tagged as a release; the installation packages are on the releases page:

https://github.com/cby-chen/Kubernetes/releases

(Faster download) my own file share: https://pan.oiox.cn/s/PetV

1. Environment

| Hostname | IP address | Description | Software |
| -------- | ---------- | ----------- | -------- |
| Master01 | 10.0.0.81 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived |
| Master02 | 10.0.0.82 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived |
| Master03 | 10.0.0.83 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived |
| Node01 | 10.0.0.84 | node | kubelet, kube-proxy, nfs-client |
| Node02 | 10.0.0.85 | node | kubelet, kube-proxy, nfs-client |
|  | 10.0.0.89 | VIP |  |

| Software | Version |
| -------- | ------- |
| kernel | 4.18.0-373.el8.x86_64 |
| CentOS | v8 or v7 |
| kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy | v1.23.7 |
| etcd | v3.5.4 |
| containerd | v1.6.6 |
| cfssl | v1.6.1 |
| cni | v1.1.1 |
| crictl | v1.23.0 |
| haproxy | v1.8.27 |
| keepalived | v2.1.5 |

Network segments

Physical hosts: 192.168.1.0/24

service: 10.96.0.0/12

pod: 172.16.0.0/12

If resources allow, it is recommended to run the k8s cluster and the etcd cluster on separate machines.

1.1. Basic system environment configuration for k8s

1.2. Configure IP addresses

ssh root@10.1.1.112 "nmcli con mod ens160 ipv4.addresses 10.0.0.81/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.1.1.102 "nmcli con mod ens160 ipv4.addresses 10.0.0.82/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.1.1.104 "nmcli con mod ens160 ipv4.addresses 10.0.0.83/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.1.1.105 "nmcli con mod ens160 ipv4.addresses 10.0.0.84/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.1.1.106 "nmcli con mod ens160 ipv4.addresses 10.0.0.85/8; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.0.0.81 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::10; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.82 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::20; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.83 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::30; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.84 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::40; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.85 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ca:9fa1:181c::50; nmcli con mod ens160 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
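
A quick sanity check after reconfiguring the addresses (a minimal sketch; adjust the interface name if yours is not ens160):
# Confirm that the IPv4 and IPv6 addresses were applied
ip -4 addr show ens160
ip -6 addr show ens160
# Confirm outbound connectivity over both families
ping -c 2 8.8.8.8
ping6 -c 2 2001:4860:4860::8888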

1.3. Set the hostnames

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
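
If the passwordless login from 1.13 is already in place, the same five commands can also be run remotely from one machine; a sketch:
ssh root@10.0.0.81 "hostnamectl set-hostname k8s-master01"
ssh root@10.0.0.82 "hostnamectl set-hostname k8s-master02"
ssh root@10.0.0.83 "hostnamectl set-hostname k8s-master03"
ssh root@10.0.0.84 "hostnamectl set-hostname k8s-node01"
ssh root@10.0.0.85 "hostnamectl set-hostname k8s-node02"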

1.4. Configure the yum repositories

# For CentOS 7
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
 -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
 -i.bak \
 /etc/yum.repos.d/CentOS-*.repo
# For CentOS 8
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
 -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
 -i.bak \
 /etc/yum.repos.d/CentOS-*.repo
# Alternatively, point baseurl at an internal mirror
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://10.0.0.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo

1.5. Install essential tools

yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

1.6. Optionally download the required tools

1. Download the Kubernetes 1.23.x binary package
GitHub binary download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md
wget https://dl.k8s.io/v1.23.7/kubernetes-server-linux-amd64.tar.gz
2. Download the etcdctl binary package
GitHub binary download page: https://github.com/etcd-io/etcd/releases
wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz
3. Download the containerd binary package
GitHub download page: https://github.com/containerd/containerd/releases
Download the containerd build that bundles the cni plugins.
wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz
4. Download the cfssl binary packages
GitHub binary download page: https://github.com/cloudflare/cfssl/releases
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
5. Download the cni plugins
GitHub download page: https://github.com/containernetworking/plugins/releases
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
6. Download the crictl client binary
GitHub download page: https://github.com/kubernetes-sigs/cri-tools/releases
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz

1.7. Disable the firewall

systemctl disable --now firewalld

1.8. Disable SELinux

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.9. Disable the swap partition

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
cat /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0

1.10. Disable NetworkManager and enable network (except on the lb nodes)

systemctl disable --now NetworkManager
systemctl start network && systemctl enable network

1.11. Configure time synchronization

Server
yum install chrony -y
cat > /etc/chrony.conf << EOF 
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 10.0.0.0/8
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd
systemctl enable chronyd
Client
yum install chrony -y
cat > /etc/chrony.conf << EOF 
pool 10.0.0.81 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd
Verify from a client
chronyc sources -v

1.12. Configure ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

1.13. Configure passwordless SSH login

yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="10.0.0.81 10.0.0.82 10.0.0.83 10.0.0.84 10.0.0.85"
export SSHPASS=123123
for HOST in $IP;do
 sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done
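
To confirm the keys were distributed, a short verification loop (reuses the $IP list above; each host should print its hostname without asking for a password):
for HOST in $IP; do ssh -o BatchMode=yes $HOST hostname; done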

1.14. Add and enable the ELRepo source

Configure the source for RHEL 8 or CentOS 8
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
Install ELRepo for RHEL 7, SL 7, or CentOS 7
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
List the available packages
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

1.15. Upgrade the kernel to 4.18 or newer

Install the latest kernel
# kernel-ml (mainline) is used here; install kernel-lt instead if you want the long-term maintenance branch
yum --enablerepo=elrepo-kernel install kernel-ml
List the installed kernels
rpm -qa | grep kernel
kernel-core-4.18.0-358.el8.x86_64
kernel-tools-4.18.0-358.el8.x86_64
kernel-ml-core-5.16.7-1.el8.elrepo.x86_64
kernel-ml-5.16.7-1.el8.elrepo.x86_64
kernel-modules-4.18.0-358.el8.x86_64
kernel-4.18.0-358.el8.x86_64
kernel-tools-libs-4.18.0-358.el8.x86_64
kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64
Check the default kernel
grubby --default-kernel
/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64
If it is not the newest one, set it with
grubby --set-default /boot/vmlinuz-<your-kernel-version>.x86_64
Reboot to take effect
reboot
Combined command for CentOS 8:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot
Combined command for CentOS 7:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel
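
After the reboot, confirm that the new kernel is actually running (the version will match whatever kernel-ml release was installed):
uname -r
# e.g. 5.16.7-1.el8.elrepo.x86_64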

1.16. Install ipvsadm

yum install ipvsadm ipset sysstat conntrack libseccomp -y
cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 180224 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 176128 1 ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 3 nf_conntrack,xfs,ip_vs

1.17. Tune the kernel parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720


net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 0

EOF
sysctl --system

1.18. Configure local hosts resolution on all nodes

cat > /etc/hosts <<EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

2408:8207:78ca:9fa1:181c::10 k8s-master01
2408:8207:78ca:9fa1:181c::20 k8s-master02
2408:8207:78ca:9fa1:181c::30 k8s-master03
2408:8207:78ca:9fa1:181c::40 k8s-node01
2408:8207:78ca:9fa1:181c::50 k8s-node02

10.0.0.81 k8s-master01
10.0.0.82 k8s-master02
10.0.0.83 k8s-master03
10.0.0.84 k8s-node01
10.0.0.85 k8s-node02
10.0.0.89 lb-vip
EOF
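
If the file is written on master01 only, it can be pushed to the remaining nodes instead of editing each one; a sketch that assumes the passwordless login from 1.13:
for HOST in 10.0.0.82 10.0.0.83 10.0.0.84 10.0.0.85; do scp /etc/hosts $HOST:/etc/hosts; done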

2. Install the basic k8s components

2.1. Install Containerd as the runtime on all k8s nodes

wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
#Create the directories required by the cni plugins
mkdir -p /etc/cni/net.d /opt/cni/bin 
#Unpack the cni binaries
tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/
wget https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz
#Unpack
tar -C / -xzf cri-containerd-cni-1.6.6-linux-amd64.tar.gz
#Create the service unit file
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

2.1.1 Configure the modules required by Containerd

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

2.1.2 Load the modules

systemctl restart systemd-modules-load.service

2.1.3 Configure the kernel parameters required by Containerd

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply the kernel parameters
sysctl --system
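
To verify that the modules and parameters actually took effect (a quick check, not part of the original steps):
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# Both values should be 1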

2.1.4 Create the Containerd configuration file

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
Modify the Containerd configuration file
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image
# Find containerd.runtimes.runc.options and add SystemdCgroup = true under it
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
 SystemdCgroup = true
 [plugins."io.containerd.grpc.v1.cri".cni]
# Change the default sandbox_image to an address matching this version
 sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"

2.1.5 Start and enable at boot

systemctl daemon-reload
systemctl enable --now containerd

2.1.6 Configure the runtime endpoint for the crictl client

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz
#Unpack
tar xf crictl-v1.23.0-linux-amd64.tar.gz -C /usr/bin/
#Generate the configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
#Test
systemctl restart containerd
crictl info

2.2. Download and install k8s and etcd (master01 only)

2.2.1 Unpack the k8s package

Unpack the k8s installation files
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
Unpack the etcd installation files
tar -xf etcd-v3.5.4-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.4-linux-amd64/etcd{,ctl}
# Check the contents of /usr/local/bin
ls /usr/local/bin/
etcd etcdctl kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler

2.2.2 Check the versions

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.23.7
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.4
API version: 3.5
[root@k8s-master01 ~]# 

2.2.3 Send the components to the other k8s nodes

Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02'
for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
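
A quick check that the binaries landed on every node (reuses the $Master and $Work lists above):
for NODE in $Master $Work; do echo $NODE; ssh $NODE "kubelet --version"; done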

2.3 Create the certificate-related files

mkdir pki
cd pki
cat > admin-csr.json << EOF 
{
 "CN": "admin",
 "key": {
 "algo": "rsa",
 "size": 2048
 },
 "names": [
 {
 "C": "CN",
 "ST": "Beijing",
 "L": "Beijing",
 "O": "system:masters",
 "OU": "Kubernetes-manual"
 }
 ]
}
EOF
cat > ca-config.json << EOF 
{
 "signing": {
 "default": {
 "expiry": "876000h"
 },
 "profiles": {
 "kubernetes": {
 "usages": [
 "signing",
 "key encipherment",
 "server auth",
 "client auth"
 ],
 "expiry": "876000h"
 }
 }
 }
}
EOF
cat > etcd-ca-csr.json << EOF 
{
 "CN": "etcd",
 "key": {
 "algo": "rsa",
 "size": 2048
 },
 "names": [
 {
 "C": "CN",
 "ST": "Beijing",
 "L": "Beijing",
 "O": "etcd",
 "OU": "Etcd Security"
 }
 ],
 "ca": {
 "expiry": "876000h"
 }
}
EOF
cat > front-proxy-ca-csr.json << EOF 
{
 "CN": "kubernetes",
 "key": {
 "algo": "rsa",
 "size": 2048
 },
 "ca": {
 "expiry": "876000h"
 }
}
EOF
cat > kubelet-csr.json << EOF 
{
 "CN": "system:node:\$NODE",
 "key": {
 "algo": "rsa",
 "size": 2048
 },
 "names": [
 {
 "C": "CN",
 "L": "Beijing",
 "ST": "Beijing",
 "O": "system:nodes",
 "OU": "Kubernetes-manual"
 }
 ]
}
EOF
cat > manager-csr.json << EOF 
{
 "CN": "system:kube-controller-manager",
 "key": {
 "algo": "rsa",
 "size": 2048
 },
 "names": [
 {
 "C": "CN",
 "ST": "Beijing",
 "L": "Beijing",
 "O": "system:kube-controller-manager",
 "OU": "Kubernetes-manual"
 }
 ]
}
EOF
cat > apiserver-csr.json << EOF 
{
 "CN": "kube-apiserver",
 "key": {
 "algo": "rsa",
 "size": 2048
 },
 "names": [
 {
 "C": "CN",
 "ST": "Beijing",
 "L": "Beijing",
 "O": "Kubernetes",
 "OU": "Kubernetes-manual"
 }
 ]
}
EOF
cat > ca-csr.json << EOF 
{
 "CN": "kubernetes",
 "key": {
 "algo": "rsa",
 "size": 2048
 },
 "names": [
 {
 "C": "CN",
 "ST": "Beijing",
 "L": "Beijing",
 "O": "Kubernetes",
 "OU": "Kubernetes-manual"
 }
 ],
 "ca": {
 "expiry": "876000h"
 }
}
EOF
cat > etcd-csr.json << EOF 
{
 "CN": "etcd",
 "key": {
 "algo": "rsa",
 "size": 2048
 },
 "names": [
 {
 "C": "CN",
 "ST": "Beijing",
 "L": "Beijing",
 "O": "etcd",
 "OU": "Etcd Security"
 }
 ]
}
EOF
cat > front-proxy-client-csr.json << EOF 
{
 "CN": "front-proxy-client",
 "key": {
 "algo": "rsa",
 "size": 2048
 }
}
EOF
cat > kube-proxy-csr.json << EOF 
{
 "CN": "system:kube-proxy",
 "key": {
 "algo": "rsa",
 "size": 2048
 },
 "names": [
 {
 "C": "CN",
 "ST": "Beijing",
 "L": "Beijing",
 "O": "system:kube-proxy",
 "OU": "Kubernetes-manual"
 }
 ]
}
EOF
cat > scheduler-csr.json << EOF 
{
 "CN": "system:kube-scheduler",
 "key": {
 "algo": "rsa",
 "size": 2048
 },
 "names": [
 {
 "C": "CN",
 "ST": "Beijing",
 "L": "Beijing",
 "O": "system:kube-scheduler",
 "OU": "Kubernetes-manual"
 }
 ]
}
EOF
cd ..
mkdir bootstrap
cd bootstrap
cat > bootstrap.secret.yaml << EOF 
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF
cd ..
mkdir coredns
cd coredns
cat > coredns.yaml << EOF 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
        - name: coredns
          image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          args: [ "-conf", "/etc/coredns/Corefile" ]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - all
            readOnlyRootFilesystem: true
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 8181
              scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
              - key: Corefile
                path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: metrics
      port: 9153
      protocol: TCP
EOF
cd ..
mkdir metrics-server
cd metrics-server
cat > metrics-server.yaml << EOF 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki

---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

3. Generate the certificates

Download the certificate generation tools on master01
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
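
Verify that the tool is on the PATH and executable before continuing:
cfssl version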

3.1. Generate the etcd certificates

Unless otherwise noted, the following operations are performed on all master nodes

3.1.1 Create the certificate directory on all master nodes

mkdir /etc/etcd/ssl -p

3.1.2 Generate the etcd certificates on master01

cd Kubernetes/pki/
# Generate the etcd certificate and its key (if you may scale out later, reserve a few extra IPs in the list)
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
cfssl gencert \
 -ca=/etc/etcd/ssl/etcd-ca.pem \
 -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
 -config=ca-config.json \
 -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,10.0.0.81,10.0.0.82,10.0.0.83 \
 -profile=kubernetes \
 etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
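
Optionally inspect the SANs baked into the new certificate to confirm the -hostname list was honored (openssl ships with CentOS):
openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A 1 "Subject Alternative Name"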

3.1.3 Copy the certificates to the other nodes

Master='k8s-master02 k8s-master03'
for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

3.2. Generate the k8s certificates

Unless otherwise noted, the following operations are performed on all master nodes

3.2.1 Create the certificate directory on all k8s nodes

mkdir -p /etc/kubernetes/pki

3.2.2 Generate the k8s certificates on master01

# Generate a root certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
# 10.96.0.1 is the first address of the service network and must be derived from it; 10.0.0.89 is the high-availability VIP
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=10.96.0.1,10.0.0.89,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,10.0.0.81,10.0.0.82,10.0.0.83,10.0.0.84,10.0.0.85,10.0.0.86,10.0.0.87,10.0.0.88,10.0.0.89,192.168.1.90,192.168.1.40,192.168.1.41 \
-profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

3.2.3 Generate the apiserver aggregation certificate

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca 
# A warning is printed here; it can be ignored
cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.2.4 Generate the controller-manager certificate

cfssl gencert \
 -ca=/etc/kubernetes/pki/ca.pem \
 -ca-key=/etc/kubernetes/pki/ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
# Set a cluster entry
kubectl config set-cluster kubernetes \
 --certificate-authority=/etc/kubernetes/pki/ca.pem \
 --embed-certs=true \
 --server=https://10.0.0.89:8443 \
 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Set an environment entry (a context)
kubectl config set-context system:kube-controller-manager@kubernetes \
 --cluster=kubernetes \
 --user=system:kube-controller-manager \
 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Set a user entry
kubectl config set-credentials system:kube-controller-manager \
 --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
 --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
 --embed-certs=true \
 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
cfssl gencert \
 -ca=/etc/kubernetes/pki/ca.pem \
 -ca-key=/etc/kubernetes/pki/ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
kubectl config set-cluster kubernetes \
 --certificate-authority=/etc/kubernetes/pki/ca.pem \
 --embed-certs=true \
 --server=https://10.0.0.89:8443 \
 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
 --client-certificate=/etc/kubernetes/pki/scheduler.pem \
 --client-key=/etc/kubernetes/pki/scheduler-key.pem \
 --embed-certs=true \
 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
 --cluster=kubernetes \
 --user=system:kube-scheduler \
 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
cfssl gencert \
 -ca=/etc/kubernetes/pki/ca.pem \
 -ca-key=/etc/kubernetes/pki/ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
kubectl config set-cluster kubernetes \
 --certificate-authority=/etc/kubernetes/pki/ca.pem \
 --embed-certs=true \
 --server=https://10.0.0.89:8443 \
 --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin \
 --client-certificate=/etc/kubernetes/pki/admin.pem \
 --client-key=/etc/kubernetes/pki/admin-key.pem \
 --embed-certs=true \
 --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes \
 --cluster=kubernetes \
 --user=kubernetes-admin \
 --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig

3.2.5 Create the ServiceAccount key (secret)

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

3.2.6 Send the certificates to the other master nodes

for NODE in k8s-master02 k8s-master03; do for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done; for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done

3.2.7 Check the certificates

ls /etc/kubernetes/pki/
admin.csr apiserver-key.pem ca.pem front-proxy-ca.csr front-proxy-client-key.pem scheduler.csr
admin-key.pem apiserver.pem controller-manager.csr front-proxy-ca-key.pem front-proxy-client.pem scheduler-key.pem
admin.pem ca.csr controller-manager-key.pem front-proxy-ca.pem sa.key scheduler.pem
apiserver.csr ca-key.pem controller-manager.pem front-proxy-client.csr sa.pub
# 23 files in total means everything is in place
ls /etc/kubernetes/pki/ |wc -l
23

4. k8s system component configuration

4.1. etcd configuration

4.1.1 master01 configuration

cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.81:2380'
listen-client-urls: 'https://10.0.0.81:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.81:2380'
advertise-client-urls: 'https://10.0.0.81:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.81:2380,k8s-master02=https://10.0.0.82:2380,k8s-master03=https://10.0.0.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
peer-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 peer-client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.2 master02 configuration

cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.82:2380'
listen-client-urls: 'https://10.0.0.82:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.82:2380'
advertise-client-urls: 'https://10.0.0.82:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.81:2380,k8s-master02=https://10.0.0.82:2380,k8s-master03=https://10.0.0.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
peer-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 peer-client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.3 master03 configuration

cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.83:2380'
listen-client-urls: 'https://10.0.0.83:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.83:2380'
advertise-client-urls: 'https://10.0.0.83:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.81:2380,k8s-master02=https://10.0.0.82:2380,k8s-master03=https://10.0.0.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
peer-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 peer-client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.2. Create the services (all master nodes)

4.2.1 Create etcd.service and start it

cat > /usr/lib/systemd/system/etcd.service << EOF

[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

EOF

4.2.2 Create the etcd certificate directory

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

4.2.3 Check etcd status

export ETCDCTL_API=3
etcdctl --endpoints="10.0.0.83:2379,10.0.0.82:2379,10.0.0.81:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 10.0.0.83:2379 | 7cb7be3df5c81965 | 3.5.2 | 20 kB | false | false | 2 | 9 | 9 | |
| 10.0.0.82:2379 | c077939949ab3f8b | 3.5.2 | 20 kB | false | false | 2 | 9 | 9 | |
| 10.0.0.81:2379 | 2ee388f67565dac9 | 3.5.2 | 20 kB | true | false | 2 | 9 | 9 | |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 pki]# 
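
Besides endpoint status, per-member health can be checked with the same flags:
etcdctl --endpoints="10.0.0.83:2379,10.0.0.82:2379,10.0.0.81:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health
# Every endpoint should report "is healthy"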

5. High availability configuration

5.1 Run on master01, master02, and master03

5.1.1 Install the keepalived and haproxy services

yum -y install keepalived haproxy

5.1.2 Modify the haproxy configuration (identical on every node)

# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s


frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:8443
 bind 127.0.0.1:8443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master


backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server k8s-master01 10.0.0.81:6443 check
 server k8s-master02 10.0.0.82:6443 check
 server k8s-master03 10.0.0.83:6443 check
EOF

5.1.3 Configure the keepalived MASTER node (master01)

#cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
 router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
 script "/etc/keepalived/check_apiserver.sh"
 interval 5 
 weight -5
 fall 2
 rise 1
}
vrrp_instance VI_1 {
 state MASTER
 interface ens160
 mcast_src_ip 10.0.0.81
 virtual_router_id 51
 priority 100
 nopreempt
 advert_int 2
 authentication {
 auth_type PASS
 auth_pass K8SHA_KA_AUTH
 }
 virtual_ipaddress {
 10.0.0.89
 }
 track_script {
 chk_apiserver 
} }

EOF

5.1.4 Configure the keepalived BACKUP node (master02)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
 router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
 script "/etc/keepalived/check_apiserver.sh"
 interval 5 
 weight -5
 fall 2
 rise 1

}
vrrp_instance VI_1 {
 state BACKUP
 interface ens160
 mcast_src_ip 10.0.0.82
 virtual_router_id 51
 priority 50
 nopreempt
 advert_int 2
 authentication {
 auth_type PASS
 auth_pass K8SHA_KA_AUTH
 }
 virtual_ipaddress {
 10.0.0.89
 }
 track_script {
 chk_apiserver 
} }

EOF

5.1.5 Configure the keepalived BACKUP node (master03)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
 router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
 script "/etc/keepalived/check_apiserver.sh"
 interval 5 
 weight -5
 fall 2
 rise 1

}
vrrp_instance VI_1 {
 state BACKUP
 interface ens160
 mcast_src_ip 10.0.0.83
 virtual_router_id 51
 priority 30
 nopreempt
 advert_int 2
 authentication {
 auth_type PASS
 auth_pass K8SHA_KA_AUTH
 }
 virtual_ipaddress {
 10.0.0.89
 }
 track_script {
 chk_apiserver 
} }

EOF

5.1.6 Configure the health check script

cat > /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash

err=0
for k in \$(seq 1 3)
do
 check_code=\$(pgrep haproxy)
 if [[ \$check_code == "" ]]; then
 err=\$(expr \$err + 1)
 sleep 1
 continue
 else
 err=0
 break
 fi
done

if [[ \$err != "0" ]]; then
 echo "systemctl stop keepalived"
 /usr/bin/systemctl stop keepalived
 exit 1
else
 exit 0
fi
EOF
# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh

5.1.7 Start the services

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

5.1.8 Test high availability

# The VIP should answer ping
[root@k8s-node02 ~]# ping 10.0.0.89
# And accept telnet connections
[root@k8s-node02 ~]# telnet 10.0.0.89 8443
# Shut down the primary node and check whether the VIP fails over to a backup node
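
One way to watch the failover is to check which node currently holds the VIP (a sketch; the interface name ens160 is assumed, as above):
ip addr show ens160 | grep 10.0.0.89
# This prints the VIP on the MASTER node; after systemctl stop keepalived there, it should appear on a BACKUP node instead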

6. k8s component configuration (distinct from section 4)

Create the following directories on all k8s nodes

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1. Create the apiserver service (all master nodes)

6.1.1 master01 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
 --v=2 \
 --logtostderr=true \
 --allow-privileged=true \
 --bind-address=0.0.0.0 \
 --secure-port=6443 \
 --insecure-port=0 \
 --advertise-address=10.0.0.81 \
 --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \
 --feature-gates=IPv6DualStack=true \
 --service-node-port-range=30000-32767 \
 --etcd-servers=https://10.0.0.81:2379,https://10.0.0.82:2379,https://10.0.0.83:2379 \
 --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
 --etcd-certfile=/etc/etcd/ssl/etcd.pem \
 --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
 --client-ca-file=/etc/kubernetes/pki/ca.pem \
 --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
 --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
 --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
 --service-account-key-file=/etc/kubernetes/pki/sa.pub \
 --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
 --service-account-issuer=https://kubernetes.default.svc.cluster.local \
 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
 --authorization-mode=Node,RBAC \
 --enable-bootstrap-token-auth=true \
 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
 --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
 --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
 --requestheader-allowed-names=aggregator \
 --requestheader-group-headers=X-Remote-Group \
 --requestheader-extra-headers-prefix=X-Remote-Extra- \
 --requestheader-username-headers=X-Remote-User \
 --enable-aggregator-routing=true
 # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF

6.1.2 master02 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
 --v=2 \
 --logtostderr=true \
 --allow-privileged=true \
 --bind-address=0.0.0.0 \
 --secure-port=6443 \
 --insecure-port=0 \
 --advertise-address=10.0.0.82 \
 --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \
 --feature-gates=IPv6DualStack=true \
 --service-node-port-range=30000-32767 \
 --etcd-servers=https://10.0.0.81:2379,https://10.0.0.82:2379,https://10.0.0.83:2379 \
 --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
 --etcd-certfile=/etc/etcd/ssl/etcd.pem \
 --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
 --client-ca-file=/etc/kubernetes/pki/ca.pem \
 --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
 --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
 --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
 --service-account-key-file=/etc/kubernetes/pki/sa.pub \
 --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
 --service-account-issuer=https://kubernetes.default.svc.cluster.local \
 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
 --authorization-mode=Node,RBAC \
 --enable-bootstrap-token-auth=true \
 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
 --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
 --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
 --requestheader-allowed-names=aggregator \
 --requestheader-group-headers=X-Remote-Group \
 --requestheader-extra-headers-prefix=X-Remote-Extra- \
 --requestheader-username-headers=X-Remote-User \
 --enable-aggregator-routing=true
 # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF

6.1.3 master03 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
 --v=2 \
 --logtostderr=true \
 --allow-privileged=true \
 --bind-address=0.0.0.0 \
 --secure-port=6443 \
 --insecure-port=0 \
 --advertise-address=10.0.0.83 \
 --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \
 --feature-gates=IPv6DualStack=true \
 --service-node-port-range=30000-32767 \
 --etcd-servers=https://10.0.0.81:2379,https://10.0.0.82:2379,https://10.0.0.83:2379 \
 --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
 --etcd-certfile=/etc/etcd/ssl/etcd.pem \
 --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
 --client-ca-file=/etc/kubernetes/pki/ca.pem \
 --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
 --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
 --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
 --service-account-key-file=/etc/kubernetes/pki/sa.pub \
 --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
 --service-account-issuer=https://kubernetes.default.svc.cluster.local \
 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
 --authorization-mode=Node,RBAC \
 --enable-bootstrap-token-auth=true \
 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
 --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
 --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
 --requestheader-allowed-names=aggregator \
 --requestheader-group-headers=X-Remote-Group \
 --requestheader-extra-headers-prefix=X-Remote-Extra- \
 --requestheader-username-headers=X-Remote-User \
 --enable-aggregator-routing=true
 # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF

6.1.4 Start the apiserver (all master nodes)

systemctl daemon-reload && systemctl enable --now kube-apiserver
# Check that the service started correctly
systemctl status kube-apiserver
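
A quick way to confirm the apiserver is serving on its secure port (a sanity check, not part of the original steps):
ss -tlnp | grep 6443
curl -k https://127.0.0.1:6443/healthz
# Should print "ok" once the default RBAC roles have been bootstrapped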

6.2. Configure the kube-controller-manager service

Configure on all master nodes; the configuration is identical.
172.16.0.0/12 is the pod network; change it to your own segment as needed.
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
 --v=2 \
 --logtostderr=true \
 --address=127.0.0.1 \
 --root-ca-file=/etc/kubernetes/pki/ca.pem \
 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
 --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
 --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
 --leader-elect=true \
 --use-service-account-credentials=true \
 --node-monitor-grace-period=40s \
 --node-monitor-period=5s \
 --pod-eviction-timeout=2m0s \
 --controllers=*,bootstrapsigner,tokencleaner \
 --allocate-node-cidrs=true \
 --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \
 --cluster-cidr=172.16.0.0/12,fc00::/48 \
 --node-cidr-mask-size-ipv4=24 \
 --node-cidr-mask-size-ipv6=64 \
 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
 --feature-gates=IPv6DualStack=true

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

6.2.1 Start kube-controller-manager and check its status

systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager

6.3. Configure the kube-scheduler service

6.3.1 Configure on all master nodes; the configuration is identical

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
 --v=2 \
 --logtostderr=true \
 --address=127.0.0.1 \
 --leader-elect=true \
 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

6.3.2 Start the service and check its status

systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler

7. TLS Bootstrapping configuration

7.1 Configure on master01

cd /root/Kubernetes/bootstrap
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true --server=https://10.0.0.89:8443 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
# The token is defined in bootstrap.secret.yaml; if you want a different one, change it there
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

7.2 Check the cluster status; if everything is healthy, continue with the subsequent steps

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok 
controller-manager Healthy ok 
etcd-0 Healthy {"health":"true","reason":""} 
etcd-2 Healthy {"health":"true","reason":""} 
etcd-1 Healthy {"health":"true","reason":""} 
kubectl create -f bootstrap.secret.yaml
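
The created objects can be verified before moving on (the names come from bootstrap.secret.yaml above):
kubectl get secret bootstrap-token-c8ad9c -n kube-system
kubectl get clusterrolebinding kubelet-bootstrap node-autoapprove-bootstrap node-autoapprove-certificate-rotation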

8. Node configuration

8.1. Copy the certificates from master01 to the node(s)

cd /etc/kubernetes/
 
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

8.2. kubelet configuration

8.2.1 Create the required directories on all k8s nodes

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
Configure the kubelet service on all k8s nodes
cat > /usr/lib/systemd/system/kubelet.service << EOF

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \
 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \
 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
 --config=/etc/kubernetes/kubelet-conf.yml \
 --network-plugin=cni \
 --cni-conf-dir=/etc/cni/net.d \
 --cni-bin-dir=/opt/cni/bin \
 --container-runtime=remote \
 --runtime-request-timeout=15m \
 --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
 --cgroup-driver=systemd \
 --node-labels=node.kubernetes.io/node= \
 --feature-gates=IPv6DualStack=true

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

Note: on CentOS 7, delete the '' from --node-labels=node.kubernetes.io/node='', i.e. use --node-labels=node.kubernetes.io/node= (as written in the unit above)

8.2.2 Create the kubelet configuration file on all k8s nodes

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
 imagefs.available: 15%
 memory.available: 100Mi
 nodefs.available: 10%
 nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

8.2.3 Start kubelet

systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet

8.2.4 Check the cluster

[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 2m33s v1.23.7
k8s-master02 Ready <none> 2m26s v1.23.7
k8s-master03 Ready <none> 2m27s v1.23.7
k8s-node01 Ready <none> 21s v1.23.7
k8s-node02 Ready <none> 21s v1.23.7
[root@k8s-master01 ~]#

8.3. kube-proxy configuration

8.3.1 This configuration is performed only on master01

cd /root/Kubernetes/
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy \
--clusterrole system:node-proxier \
--serviceaccount kube-system:kube-proxy
SECRET=$(kubectl -n kube-system get sa/kube-proxy \
 --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.0.0.89:8443 \
--kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
kubectl config set-credentials kubernetes \
--token=${JWT_TOKEN} \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=kubernetes \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kubernetes \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

8.3.2 Send the kubeconfig to the other nodes

for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

8.3.3 Add the kube-proxy configuration and service file on all k8s nodes

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
 --config=/etc/kubernetes/kube-proxy.yaml \
 --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF
cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12,fc00::/48
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms

EOF

8.3.4 Start kube-proxy

systemctl daemon-reload
systemctl enable --now kube-proxy
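
To confirm kube-proxy really came up in ipvs mode, you can query its metrics endpoint (127.0.0.1:10249, as set in kube-proxy.yaml) and, if the ipvsadm package is installed, list the virtual servers:

curl -s 127.0.0.1:10249/proxyMode && echo   # expected output: ipvs
ipvsadm -Ln | head                          # virtual servers such as 10.96.0.1:443 should appear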

9. Install Calico

9.1 The following steps are performed on master01 only

9.1.1 Change the Calico network ranges

# vim calico.yaml
vim calico-ipv6.yaml
# In the calico-config ConfigMap:
    "ipam": {
        "type": "calico-ipam",
        "assign_ipv4": "true",
        "assign_ipv6": "true"
    },
# In the calico-node env:
- name: IP
  value: "autodetect"
- name: IP6
  value: "autodetect"
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"
- name: CALICO_IPV6POOL_CIDR
  value: "fc00::/48"
- name: FELIX_IPV6SUPPORT
  value: "true"
# kubectl apply -f calico.yaml
kubectl apply -f calico-ipv6.yaml
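
Once the Calico pods are up, the dual-stack address pools can be inspected; with a manifest install the pools are stored as CRD objects, so a check along these lines (pool names may differ) should show both CIDRs configured above:

kubectl get ippools.crd.projectcalico.org
# expect an IPv4 pool for 172.16.0.0/16 and an IPv6 pool for fc00::/48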

9.1.2 Check the pod status

[root@k8s-master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6f6595874c-2j9l8   1/1     Running   0          67s
kube-system   calico-node-k8rb4                          1/1     Running   0          67s
kube-system   calico-node-lkcls                          1/1     Running   0          67s
kube-system   calico-node-mr8rl                          1/1     Running   0          67s
kube-system   calico-node-q84pm                          1/1     Running   0          67s
kube-system   calico-node-v7dwl                          1/1     Running   0          67s
kube-system   calico-typha-6b6cf8cbdf-cb2wl              1/1     Running   0          67s
[root@k8s-master01 ~]#

10. Install CoreDNS

10.1 The following steps are performed on master01 only

10.1.1 Modify the file

cd CoreDNS/
sed -i "s#KUBEDNS_SERVICE_IP#10.96.0.10#g" coredns.yaml
cat coredns.yaml | grep clusterIP:
 clusterIP: 10.96.0.10 

10.1.2 Install

kubectl create -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
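
A quick sanity check that CoreDNS is serving (k8s-app=kube-dns is the label used by the stock coredns.yaml; adjust if your manifest differs):

kubectl get pod -n kube-system -l k8s-app=kube-dns
kubectl get svc -n kube-system kube-dns   # CLUSTER-IP should be 10.96.0.10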

11. Install Metrics Server

11.1 The following steps are performed on master01 only

11.1.1 Install metrics-server

In recent Kubernetes versions, system resource metrics are collected through metrics-server, which reports memory, disk, CPU and network usage for nodes and Pods.

Install metrics-server:
cd metrics-server/
kubectl create -f . 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

11.1.2 Wait a moment, then check the status

kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   154m         1%     1715Mi          21%
k8s-master02   151m         1%     1274Mi          16%
k8s-master03   523m         6%     1345Mi          17%
k8s-node01     84m          1%     671Mi           8%
k8s-node02     73m          0%     727Mi           9%
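
The same metrics pipeline also serves per-Pod usage, which is a handy check that the v1beta1.metrics.k8s.io apiservice registered correctly:

kubectl top pod -A | head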

12. Cluster validation

12.1 Deploy a pod resource

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
# Check
kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

12.2 Use the pod to resolve the kubernetes service in the default namespace

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h
kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3 Test cross-namespace resolution

kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

12.4 Every node must be able to reach the kubernetes service on 443 and the kube-dns service on 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.
telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.
curl 10.96.0.10:53
curl: (52) Empty reply from server
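
These checks must pass on every node, not just master01. A loop using bash's built-in /dev/tcp (so nothing extra needs to be installed on the nodes) might look like:

for NODE in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  echo "--- $NODE ---"
  ssh $NODE "timeout 2 bash -c '</dev/tcp/10.96.0.1/443' && echo 'kubernetes 443 ok'; \
             timeout 2 bash -c '</dev/tcp/10.96.0.10/53' && echo 'kube-dns 53 ok'"
done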

12.5 Pod-to-Pod connectivity must work

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>
kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   10.0.0.81        k8s-master01   <none>           <none>
calico-node-g8nqd                          1/1     Running   0             77m   10.0.0.84        k8s-node01     <none>           <none>
calico-node-mdps8                          1/1     Running   0             77m   10.0.0.85        k8s-node02     <none>           <none>
calico-node-nf4nt                          1/1     Running   0             77m   10.0.0.83        k8s-master03   <none>           <none>
calico-node-sq2ml                          1/1     Running   0             77m   10.0.0.82        k8s-master02   <none>           <none>
calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   10.0.0.85        k8s-node02     <none>           <none>
calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   10.0.0.81        k8s-master01   <none>           <none>
calico-typha-8445487f56-tnssl              1/1     Running   0             77m   10.0.0.84        k8s-node01     <none>           <none>
coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>
# Exec into busybox and ping a node (or pod) on another host
kubectl exec -ti busybox -- sh
/ # ping 10.0.0.84
PING 10.0.0.84 (10.0.0.84): 56 data bytes
64 bytes from 10.0.0.84: seq=0 ttl=63 time=0.358 ms
64 bytes from 10.0.0.84: seq=1 ttl=63 time=0.668 ms
64 bytes from 10.0.0.84: seq=2 ttl=63 time=0.637 ms
64 bytes from 10.0.0.84: seq=3 ttl=63 time=0.624 ms
64 bytes from 10.0.0.84: seq=4 ttl=63 time=0.907 ms
# Connectivity here shows the pod can communicate across namespaces and across hosts

12.6 Create three replicas and confirm they are spread across different nodes (delete when done)

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

EOF
kubectl apply -f deployments.yaml 
deployment.apps/nginx-deployment created
kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s
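
To actually see that the replicas landed on different nodes, inspect the NODE column (app=nginx is the label set in deployments.yaml):

kubectl get pod -l app=nginx -owide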
# Delete the nginx deployment
[root@k8s-master01 ~]# kubectl delete -f deployments.yaml 

13. Install the dashboard

cd dashboard/
kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

13.1 Create an admin user

cat > admin.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

13.2 Apply the yaml file

kubectl apply -f admin.yaml -n kube-system
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

13.3 Change the dashboard svc type to NodePort (skip if it already is)

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
 type: NodePort
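
If you prefer a non-interactive change over kubectl edit, the same result can be achieved with kubectl patch:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'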

13.4 Check the port

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.98.201.22   <none>        443:31245/TCP   10m

13.5 Get the token

[root@k8s-master01 dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print 1ドル}')
Name:         admin-user-token-qfmf6
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 944c5bf6-059a-4e5e-8b18-462956dd0466
Type:  kubernetes.io/service-account-token
Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjZ3RFhuV1U5dmgwbmJESEdHYjR2SUNHeWFjdzMwWnRUb2pYZXNGNUlrb2cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXFmbWY2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5NDRjNWJmNi0wNTlhLTRlNWUtOGIxOC00NjI5NTZkZDA0NjYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.I7QJPtETU3FuIqQEMyq7T0yC7wsr7-mQSlcxr4qBl3JKqODOdp2t3wBgpE76rb8eHtFb7PdJa5-hmsyDPbtOwXoji2RKkt_ODBX0hpc-cS8OC4VFgTTnQVNTObLn1nP2sPTlAMJHkN5gza1W-lJanpXDwm-pzxUfVyoBn0a_AWwCc7AamhFrkSGHwEyoOFiN7-UuLAfWnjJtiiWWhugQbduzhvO78QGCWHGwewpSZ74qzfZgQftchXhJa_284_L9LhIFyh8qin5eoRZUi0ALz3wHsGD0L8hMXWqqSnHPswO3SkBRUMtt9CNCqDpH9WeNMNyut7m5MAg5I0nWMhggJQ
ca.crt:     1363 bytes
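
For scripting, the token can also be extracted in one line with jsonpath instead of describe (same secret as above):

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print 1ドル}') -o jsonpath='{.data.token}' | base64 -d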

13.6 Log in to the dashboard

https://10.0.0.81:31245/

eyJhbGciOiJSUzI1NiIsImtpZCI6IjZ3RFhuV1U5dmgwbmJESEdHYjR2SUNHeWFjdzMwWnRUb2pYZXNGNUlrb2cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXFmbWY2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5NDRjNWJmNi0wNTlhLTRlNWUtOGIxOC00NjI5NTZkZDA0NjYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.I7QJPtETU3FuIqQEMyq7T0yC7wsr7-mQSlcxr4qBl3JKqODOdp2t3wBgpE76rb8eHtFb7PdJa5-hmsyDPbtOwXoji2RKkt_ODBX0hpc-cS8OC4VFgTTnQVNTObLn1nP2sPTlAMJHkN5gza1W-lJanpXDwm-pzxUfVyoBn0a_AWwCc7AamhFrkSGHwEyoOFiN7-UuLAfWnjJtiiWWhugQbduzhvO78QGCWHGwewpSZ74qzfZgQftchXhJa_284_L9LhIFyh8qin5eoRZUi0ALz3wHsGD0L8hMXWqqSnHPswO3SkBRUMtt9CNCqDpH9WeNMNyut7m5MAg5I0nWMhggJQ

14. Install ingress

14.1 Write the configuration file and apply it

[root@hello ~/yaml]# vim deploy.yaml
[root@hello ~/yaml]#
[root@hello ~/yaml]#
[root@hello ~/yaml]# cat deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/chenby/controller:v1.1.3
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
[root@hello ~/yaml]#

14.2 Enable the default backend; write the configuration file and apply it

[root@hello ~/yaml]# vim backend.yaml
[root@hello ~/yaml]# cat backend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    app.kubernetes.io/name: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
[root@hello ~/yaml]#

14.3 Install a test application

[root@hello ~/yaml]# vim ingress-demo-app.yaml
[root@hello ~/yaml]#
[root@hello ~/yaml]# cat ingress-demo-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
[root@hello ~/yaml]#
[root@hello ~/yaml]# kubectl get ingress
NAME               CLASS    HOSTS                            ADDRESS     PORTS   AGE
ingress-demo-app   <none>   app.demo.com                     10.0.0.81   80      20m
ingress-host-bar   nginx    hello.chenby.cn,demo.chenby.cn   10.0.0.81   80      2m17s
[root@hello ~/yaml]#

14.4 Apply the deployment

root@hello:~# kubectl apply -f deploy.yaml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
root@hello:~# 
root@hello:~# kubectl apply -f backend.yaml 
deployment.apps/default-http-backend created
service/default-http-backend created
root@hello:~# 
root@hello:~# kubectl apply -f ingress-demo-app.yaml 
deployment.apps/hello-server created
deployment.apps/nginx-demo created
service/nginx-demo created
service/hello-server created
ingress.networking.k8s.io/ingress-host-bar created
root@hello:~# 

14.5 Filter for the ingress ports

[root@hello ~/yaml]# kubectl get svc -A | grep ingress
default         ingress-demo-app                     ClusterIP   10.68.231.41   <none>   80/TCP                       51m
ingress-nginx   ingress-nginx-controller             NodePort    10.68.93.71    <none>   80:32746/TCP,443:30538/TCP   32m
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.68.146.23   <none>   443/TCP                      32m
[root@hello ~/yaml]#
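
With the NodePort from the output above (80:32746 on this cluster; yours will differ), both hosts can be tested from any machine without DNS by forcing the Host header:

curl -H 'Host: hello.chenby.cn' http://10.0.0.81:32746
curl -H 'Host: demo.chenby.cn' http://10.0.0.81:32746/nginx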

15. Install command-line auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

About

https://www.oiox.cn/

https://www.oiox.cn/index.php/start-page.html

CSDN, GitHub, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Weibo, and my personal blog

Search for 《小陈运维》 on any platform

Articles are published mainly on the WeChat official account 《Linux运维交流社区》
