Clustered Website-Building Templates


What does "industrial cluster" mean?

An industrial cluster is a group of interconnected companies, specialized suppliers, service providers, financial institutions, firms in related industries and other associated organizations that both compete and cooperate and are geographically concentrated in a particular area. Clusters differ in depth and complexity, and they represent a new form of spatial economic organization that sits between pure markets and hierarchies.


Is Jianzhan Baohe ("site-builder box") a template-based site builder, or an intelligent site-building system?

What is the real difference between template-based and "intelligent" site building? Either way it is not natively developed from scratch, so such products offer very little customizability; with native development you can build whatever you want.


A reader's reply on clustered setups:

Building a self-hosted high-availability Kubernetes cluster

I. Base environment for all nodes

192.168.0.x : the machines' network segment
10.96.0.0/16 : the Service network segment
196.16.0.0/16 : the Pod network segment

1. Environment preparation and kernel upgrade

First upgrade the kernel on all machines.

# My OS version
cat /etc/redhat-release
# CentOS Linux release 7.9.2009 (Core)

# Set a hostname -- it must not be localhost
hostnamectl set-hostname k8s-xxx

# Cluster plan:
# k8s-master1 k8s-master2 k8s-master3 k8s-master-lb k8s-node01 k8s-node02 ... k8s-nodeN

# Add the hostnames on every machine
vi /etc/hosts
192.168.0.10 k8s-master1
192.168.0.11 k8s-master2
192.168.0.12 k8s-master3
192.168.0.13 k8s-node1
192.168.0.14 k8s-node2
192.168.0.15 k8s-node3
192.168.0.250 k8s-master-lb   # not needed for a non-HA setup; this VIP is managed by keepalived

# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# Disable swap
swapoff -a && sysctl -w vm.swappiness=0
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Raise the resource limits
ulimit -SHn 65535
vi /etc/security/limits.conf
# append at the end:
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

# Configure passwordless SSH for convenience later; run on master1
ssh-keygen -t rsa
for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3; do ssh-copy-id -i .ssh/id_rsa.pub $i; done

# Install tools used later
yum install wget git jq psmisc net-tools yum-utils device-mapper-persistent-data lvm2 -y

# All nodes: install the ipvs tooling (ipvsadm, ipset, conntrack, ...)
yum install ipvsadm ipset sysstat conntrack libseccomp -y
# All nodes: load the ipvs modules. On kernel 4.19+ use nf_conntrack; below 4.19 use nf_conntrack_ipv4
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
# Persist the module list
vi /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
# Apply it
systemctl enable --now systemd-modules-load.service   # --now = enable + start
# Check that the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack

## All nodes
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
net.ipv4.conf.all.route_localnet = 1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16768
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16768
EOF
sysctl --system
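A quick sanity check (my own addition, not one of the original steps) to confirm the settings took effect:

# Read back a few of the values written above
sysctl net.ipv4.ip_forward net.core.somaxconn
# The net.bridge.* keys only exist once the br_netfilter module is loaded
modprobe br_netfilter && sysctl net.bridge.bridge-nf-call-iptables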

# After the kernel settings are in place on all nodes, reboot and confirm the modules are still loaded
reboot
lsmod | grep -e ip_vs -e nf_conntrack

2. Install Docker

# Install Docker
yum remove docker*
yum install -y yum-utils
yum-config-manager --add-repo <docker-ce repo URL>
yum install -y docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io-1.4.4

# Newer kubelet versions expect the systemd cgroup driver, so switch Docker's CgroupDriver to systemd
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [""]
}
EOF
systemctl daemon-reload && systemctl enable --now docker

# Alternatively, download the rpm packages and install them offline with yum localinstall xxxx

II. PKI

Reference: Public Key Infrastructure (Baidu Baike)

Kubernetes needs PKI for the following:

- Client certificates for the kubelets, used to authenticate to the API server
- A server certificate for the API server endpoint
- Client certificates for cluster administrators, used to authenticate to the API server
- A client certificate for the API server to talk to the kubelets
- A client certificate for the API server to talk to etcd
- A client certificate/kubeconfig for the controller manager to talk to the API server
- A client certificate/kubeconfig for the scheduler to talk to the API server
- Client and server certificates for the front proxy

Note: the front-proxy certificates are only needed if you run kube-proxy to support an extension API server.

etcd also uses mutual TLS to authenticate clients and its peers.

Reference: PKI certificates and requirements | Kubernetes

III. Prepare the certificate tooling

# Create a directory to hold all the certificates (this mirrors how kubeadm organizes them)
# Run on all three masters
mkdir -p /etc/kubernetes/pki

1. Download the certificate tools

# Download the cfssl binaries
wget <cfssl 1.5.0 release URLs>
# Make them executable
chmod +x cfssl*
# Batch rename
for name in `ls cfssl*`; do mv $name ${name%_1.5.0_linux_amd64}; done
# Move them into the PATH
mv cfssl* /usr/bin

2. CA root configuration

ca-config.json

mkdir -p /etc/kubernetes/pki
cd /etc/kubernetes/pki
vi ca-config.json

{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "server":     { "expiry": "87600h", "usages": ["signing", "key encipherment", "server auth"] },
      "client":     { "expiry": "87600h", "usages": ["signing", "key encipherment", "client auth"] },
      "peer":       { "expiry": "87600h", "usages": ["signing", "key encipherment", "server auth", "client auth"] },
      "kubernetes": { "expiry": "87600h", "usages": ["signing", "key encipherment", "server auth", "client auth"] },
      "etcd":       { "expiry": "87600h", "usages": ["signing", "key encipherment", "server auth", "client auth"] }
    }
  }
}

3. CA signing request

A CSR (Certificate Signing Request) is the file used to request a signed certificate.

ca-csr.json

vi /etc/kubernetes/pki/ca-csr.json

{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes" }
  ],
  "ca": { "expiry": "87600h" }
}

CN (Common Name):

- The Common Name is required; for a website certificate it is usually the site's domain name.

O (Organization):

- The organization name is required. For OV/EV certificates it must exactly match the legally registered company name (as on the business license); abbreviations or trademarks are not allowed. Using an English name requires a DUNS number or a lawyer's letter as proof.

OU (Organization Unit):

- The department; there are few restrictions here, e.g. "IT DEPT" is fine.

L (Locality):

- The city where the applicant is located.

ST (State/Province):

- The province where the applicant is located.

C (Country Name):

- The two-letter uppercase country code; for China it is CN.

4. Generate the certificate

Generate the CA certificate and private key:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Produces: ca.csr, ca.pem (the CA certificate) and ca-key.pem (the CA private key -- keep it safe)
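To sanity-check the result, the new CA can be inspected with openssl (my own quick check, not one of the original steps):

# Print the subject, issuer and validity period of the CA certificate
openssl x509 -in ca.pem -noout -subject -issuer -dates
# The private key should stay on the masters only
ls -l ca-key.pem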

5. How the Kubernetes cluster uses the certificates

See the official documentation: PKI certificates and requirements | Kubernetes

IV. Building a highly available etcd cluster

1. etcd documentation

etcd examples: Demo | etcd -- learn basic etcd usage from the examples
etcd installation: Install | etcd -- follow the etcd/Kubernetes sizing guidance when planning the cluster
etcd operations: Operations guide | etcd -- covers etcd configuration and cluster deployment

2. Download etcd

# Download the etcd release and send it to all master nodes
wget <etcd v3.4.16 release tarball URL>
## Copy it to the other nodes
for i in k8s-master1 k8s-master2 k8s-master3; do scp etcd-* root@$i:/root/; done
## Extract etcd and etcdctl into /usr/local/bin
tar -zxvf etcd-v3.4.16-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.16-linux-amd64/etcd{,ctl}
## Verify
etcdctl   # any output means the binary works

3. etcd certificates

Hardware reference: Hardware recommendations | etcd

Generate the etcd certificates.

etcd-ca-csr.json

{
  "CN": "etcd",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "etcd" }
  ],
  "ca": { "expiry": "87600h" }
}

# Generate the etcd root CA certificate
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/etcd/ca -

etcd-itdachang-csr.json

{
  "CN": "etcd-itdachang",
  "key": { "algo": "rsa", "size": 2048 },
  "hosts": [
    "127.0.0.1",
    "k8s-master1",
    "k8s-master2",
    "k8s-master3",
    "192.168.0.10",
    "192.168.0.11",
    "192.168.0.12"
  ],
  "names": [
    { "C": "CN", "L": "beijing", "O": "etcd", "ST": "beijing", "OU": "System" }
  ]
}
// Note: put your own hostnames and IPs in "hosts".
// The trusted host list can also be supplied at signing time instead, e.g.
//   -hostname=127.0.0.1,k8s-master1,k8s-master2,k8s-master3,...

# Sign the etcd server certificate
cfssl gencert -ca=/etc/kubernetes/pki/etcd/ca.pem -ca-key=/etc/kubernetes/pki/etcd/ca-key.pem -config=/etc/kubernetes/pki/ca-config.json -profile=etcd etcd-itdachang-csr.json | cfssljson -bare /etc/kubernetes/pki/etcd/etcd

# Copy the etcd certificates to the other machines
for i in k8s-master2 k8s-master3; do scp -r /etc/kubernetes/pki/etcd root@$i:/etc/kubernetes/pki; done

4. Install the highly available etcd cluster

etcd configuration reference: Configuration flags | etcd
etcd clustering reference: Clustering Guide | etcd

To keep the startup configuration consistent, we write an etcd configuration file and run etcd as a systemd service.

# etcd yaml示例。# This is the configuration file for the etcd server.# Human-readable name for this member.name: 'default'# Path to the data directory.data-dir:# Path to the dedicated wal directory.wal-dir:# Number of committed transactions to trigger a snapshot to disk.snapshot-count: 10000# Time (in milliseconds) of a heartbeat interval.heartbeat-interval: 100# Time (in milliseconds) for an election to timeout.election-timeout: 1000# Raise alarms when backend size exceeds the given quota. 0 means use the# default quota.quota-backend-bytes: 0# List of comma separated URLs to listen on for peer traffic.listen-peer-urls: :2380# List of comma separated URLs to listen on for client traffic.listen-client-urls: :2379# Maximum number of snapshot files to retain (0 is unlimited).max-snapshots: 5# Maximum number of wal files to retain (0 is unlimited).max-wals: 5# Comma-separated white list of origins for CORS (cross-origin resource sharing).cors:# List of this member's peer URLs to advertise to the rest of the cluster.# The URLs needed to be a comma-separated list.initial-advertise-peer-urls: :2380# List of this member's client URLs to advertise to the public.# The URLs needed to be a comma-separated list.advertise-client-urls: :2379# Discovery URL used to bootstrap the cluster.discovery:# Valid values include 'exit', 'proxy'discovery-fallback: 'proxy'# HTTP proxy to use for traffic to discovery service.discovery-proxy:# DNS domain used to bootstrap initial cluster.discovery-srv:# Initial cluster configuration for bootstrapping.initial-cluster:# Initial cluster token for the etcd cluster during bootstrap.initial-cluster-token: 'etcd-cluster'# Initial cluster state ('new' or 'existing').initial-cluster-state: 'new'# Reject reconfiguration requests that would cause quorum loss.strict-reconfig-check: false# Accept etcd V2 client requestsenable-v2: true# Enable runtime profiling data via HTTP serverenable-pprof: true# Valid values include 'on', 'readonly', 'off'proxy: 'off'# Time (in milliseconds) an endpoint will be held in a failed state.proxy-failure-wait: 5000# Time (in milliseconds) of the endpoints refresh interval.proxy-refresh-interval: 30000# Time (in milliseconds) for a dial to timeout.proxy-dial-timeout: 1000# Time (in milliseconds) for a write to timeout.proxy-write-timeout: 5000# Time (in milliseconds) for a read to timeout.proxy-read-timeout: 0client-transport-security: # Path to the client server TLS cert file. cert-file: # Path to the client server TLS key file. key-file: # Enable client cert authentication. client-cert-auth: false # Path to the client server TLS trusted CA cert file. trusted-ca-file: # Client TLS using generated certificates auto-tls: falsepeer-transport-security: # Path to the peer server TLS cert file. cert-file: # Path to the peer server TLS key file. key-file: # Enable peer client cert authentication. client-cert-auth: false # Path to the peer server TLS trusted CA cert file. trusted-ca-file: # Peer TLS using generated certificates. auto-tls: false# Enable debug-level logging for etcd.debug: falselogger: zap# Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.log-outputs: [stderr]# Force to create a new one member cluster.force-new-cluster: falseauto-compaction-mode: periodicauto-compaction-retention: "1"

Create the /etc/etcd directory on all three etcd machines to hold the etcd configuration.

# Run on all three masters
mkdir -p /etc/etcd

vi /etc/etcd/etcd.yaml

# Our etcd.yaml
name: 'etcd-master3'   # use each machine's own name; it must be unique
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: ':2380'            # this machine's IP + port 2380, used for cluster (peer) traffic
listen-client-urls: ':2379,:2379'    # change to your own addresses
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: ':2380'   # this machine's IP
advertise-client-urls: ':2379'         # this machine's IP
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd-master1=:2380,etcd-master2=:2380,etcd-master3=:2380'   # the full member list; differs from the per-machine settings
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

Turn etcd into a systemd service on all three machines and enable it at boot.

vi /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Service
Documentation=
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.yaml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

# Load the unit and enable it at boot
systemctl daemon-reload
systemctl enable --now etcd
# If startup fails, inspect the logs with journalctl -u <service name>
journalctl -u etcd

Test etcd access:

# Check the etcd cluster status
etcdctl --endpoints="192.168.0.10:2379,192.168.0.11:2379,192.168.0.12:2379" --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table

# Convenience variables for later tests
export ETCDCTL_API=3
HOST_1=192.168.0.10
HOST_2=192.168.0.11
HOST_3=192.168.0.12
ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379
## Export the certificate locations so etcdctl picks them up automatically
export ETCDCTL_DIAL_TIMEOUT=3s
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.pem
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/etcd.pem
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/etcd-key.pem
export ETCDCTL_ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379

# Uses the certificate locations from the environment
etcdctl member list --write-out=table

# Without the environment variables, everything must be passed explicitly:
etcdctl --endpoints=$ENDPOINTS --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem member list --write-out=table

## More etcdctl commands: see the "access etcd" section of the etcd docs

V. Kubernetes components and certificates

1. Kubernetes offline installation package

Find the desired version in the Kubernetes CHANGELOG.

# Download the Kubernetes server package
wget <kubernetes-server-linux-amd64.tar.gz URL from the CHANGELOG>

2. Prepare the master nodes

# Send the kubernetes package to all nodes
for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3; do scp kubernetes-server-* root@$i:/root/; done

# On all master nodes, extract kubelet, kubectl and the other binaries into /usr/local/bin
tar -xvf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
# Masters need all of the components; worker nodes only need kubelet and kube-proxy in /usr/local/bin

3. Generate the apiserver certificate

3.1. apiserver-csr.json

// 10.96.0.1 is the first IP of the Service CIDR; adjust if you customized it, e.g. 66.66.0.1
// 192.168.0.250 is the load balancer address (self-hosted, or a cloud provider LB)
{
  "CN": "kube-apiserver",
  "hosts": [
    "10.96.0.1",
    "127.0.0.1",
    "192.168.0.250",
    "192.168.0.10",
    "192.168.0.11",
    "192.168.0.12",
    "192.168.0.13",
    "192.168.0.14",
    "192.168.0.15",
    "192.168.0.16",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "Kubernetes", "OU": "Kubernetes" }
  ]
}

3.2. Generate the apiserver certificate

# 10.96.0.1 belongs to the k8s Service CIDR; change it if you use a different Service network
# If this is not an HA cluster, use master01's IP instead of the load balancer address

# Create the root CA first (if you have not already done so above)
vi ca-csr.json
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes" }
  ],
  "ca": { "expiry": "87600h" }
}
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Sign the apiserver certificate
cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=/etc/kubernetes/pki/ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

4. Generate the front-proxy certificates

Official documentation: Configure the Aggregation Layer | Kubernetes

Note: it is better not to sign the front-proxy certificates with a brand-new CA, otherwise components proxied through it, such as metrics-server, may lose authorization. If you do use a new CA, add --requestheader-allowed-names=front-proxy-client to the api-server flags.

4.1. front-proxy-ca-csr.json

The front-proxy root CA:

vi front-proxy-ca-csr.json
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 }
}

# Generate the front-proxy root CA
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

4.2. front-proxy-client certificate

vi front-proxy-client-csr.json   # the client CSR

{
  "CN": "front-proxy-client",
  "key": { "algo": "rsa", "size": 2048 }
}

# Generate the front-proxy-client certificate
cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
# Ignore the hosts warning; this certificate is not for a website

5. Generate and configure the controller-manager certificate

5.1. controller-manager-csr.json

vi controller-manager-csr.json

{
  "CN": "system:kube-controller-manager",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes" }
  ]
}

5.2. Generate the certificate

cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes controller-manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

5.3. Generate the kubeconfig

# Note: for a non-HA cluster, replace 192.168.0.250:6443 with master01's address; 6443 is the default apiserver port

# set-cluster: define the cluster entry
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=:6443 --kubeconfig=/etc/kubernetes/controller-manager.conf
# set-context: define a context
kubectl config set-context system:kube-controller-manager@kubernetes --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=/etc/kubernetes/controller-manager.conf
# set-credentials: define the user entry
kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/controller-manager.pem --client-key=/etc/kubernetes/pki/controller-manager-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/controller-manager.conf
# use-context: make it the default context
kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=/etc/kubernetes/controller-manager.conf
# (the controller-manager is also what later auto-approves kubelet certificates)

6. Generate and configure the scheduler certificate

6.1. scheduler-csr.json

vi scheduler-csr.json

{
  "CN": "system:kube-scheduler",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes" }
  ]
}

6.2. Sign the certificate

cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=/etc/kubernetes/pki/ca-config.json -profile=kubernetes scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

6.3. Generate the kubeconfig

# Note: for a non-HA cluster, replace 192.168.0.250:6443 with master01's address; 6443 is the default apiserver port
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=:6443 --kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/scheduler.pem --client-key=/etc/kubernetes/pki/scheduler-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-context system:kube-scheduler@kubernetes --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=/etc/kubernetes/scheduler.conf
# (these kubeconfigs are also relevant to cluster security operations)

7. Generate and configure the admin certificate

7.1. admin-csr.json

vi admin-csr.json

{
  "CN": "admin",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes" }
  ]
}

7.2. Generate the certificate

cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=/etc/kubernetes/pki/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

7.3. Generate the kubeconfig

# Note: for a non-HA cluster, replace 192.168.0.250:6443 with master01's address; 6443 is the default apiserver port
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=:6443 --kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.conf
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.conf
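The generated kubeconfig files can be inspected the same way for admin, controller-manager and scheduler; a quick check of my own on the admin.conf just created:

# Show the cluster/user/context wiring (embedded certificates appear as DATA+OMITTED)
kubectl config view --kubeconfig=/etc/kubernetes/admin.conf
# Confirm which context is active
kubectl config current-context --kubeconfig=/etc/kubernetes/admin.conf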

kubelet will obtain its certificates automatically through the bootstrap mechanism, so we do not configure them by hand -- with 10,000 machines and 10,000 kubelets, configuring certificates manually would take until next year.

8. Generate the ServiceAccount keys

Under the hood, every ServiceAccount that Kubernetes creates is given a Secret, and the token inside that Secret is signed with the sa key pair we generate here. That is why we create the sa keys up front.

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

9. Send the certificates to the other nodes

# Run on master1
for NODE in k8s-master2 k8s-master3
do
  for FILE in admin.conf controller-manager.conf scheduler.conf
  do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
  done
done

VI. High-availability configuration

- If you are not building a high-availability cluster, you do not need haproxy or keepalived.

- There are several options for the apiserver load balancer:

- nginx

- haproxy

- keepalived (a minimal self-hosted haproxy + keepalived sketch follows this list)

- a load balancer product from your cloud provider

Notes for installing on a cloud:

- On a cloud you can use the provider's load balancer directly, e.g. Alibaba Cloud SLB or Tencent Cloud ELB.

- On public clouds, use the provider's load balancer instead of haproxy and keepalived, because most public clouds do not support keepalived.

- On Alibaba Cloud, the kubectl client cannot sit on a master node: Alibaba Cloud's SLB has a loopback problem (servers behind the SLB cannot reach the SLB address themselves). Tencent Cloud has fixed this, so it is the recommended choice there.

Using QingCloud:

- Create a load balancer and give it the IP address reserved in the plan above.

- Open the load balancer and create a listener.

- Choose TCP, port 6443.

- Add the backend server addresses and port.
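If you prefer to self-host the load balancer, a minimal sketch of the haproxy + keepalived option mentioned above might look like this. The VIP 192.168.0.250 and the master IPs come from the cluster plan; the file layout, the ens33 interface name and the password are my own assumptions -- adjust them to your environment.

# /etc/haproxy/haproxy.cfg (fragment): TCP pass-through to the three apiservers
frontend k8s-apiserver
    bind *:6443
    mode tcp
    default_backend k8s-masters
backend k8s-masters
    mode tcp
    balance roundrobin
    server k8s-master1 192.168.0.10:6443 check
    server k8s-master2 192.168.0.11:6443 check
    server k8s-master3 192.168.0.12:6443 check

# /etc/keepalived/keepalived.conf (fragment): float the VIP between the haproxy hosts
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the standby node, with a lower priority
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-ha
    }
    virtual_ipaddress {
        192.168.0.250
    }
}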

VII. Start the components

1. Run on all masters

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
# The kube-* binaries on all three masters live in /usr/local/bin
for NODE in k8s-master2 k8s-master3
do
  scp -r /etc/kubernetes/* root@$NODE:/etc/kubernetes/
done

This sends every certificate generated on master1 to master2 and master3.

2. Configure the apiserver service

2.1. Configuration

Create kube-apiserver.service on every master node.

Note: for a non-HA cluster, replace 192.168.0.250 with master01's address. This document uses 10.96.0.0/16 as the k8s Service CIDR; it must not overlap with the host network or the Pod CIDR. In particular, Docker's default bridge is 172.17.0.1/16 -- do not reuse that range.

# Run the following on every master node
# --advertise-address: change to this master's own IP
# --service-cluster-ip-range=10.96.0.0/16: change to your planned Service CIDR
# --etcd-servers: change to the addresses of your etcd servers
vi /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --v=2 \
  --logtostderr=true \
  --allow-privileged=true \
  --bind-address=0.0.0.0 \
  --secure-port=6443 \
  --insecure-port=0 \
  --advertise-address=192.168.0.10 \
  --service-cluster-ip-range=10.96.0.0/16 \
  --service-node-port-range=30000-32767 \
  --etcd-servers=:2379,:2379,:2379 \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem \
  --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem \
  --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --client-ca-file=/etc/kubernetes/pki/ca.pem \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
  --service-account-issuer= \
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
  --authorization-mode=Node,RBAC \
  --enable-bootstrap-token-auth=true \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
  --requestheader-allowed-names=aggregator,front-proxy-client \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

2.2. Start the apiserver service

systemctl daemon-reload && systemctl enable --now kube-apiserver
# Check its status
systemctl status kube-apiserver

3. Configure the controller-manager service

3.1. Configuration

Configure kube-controller-manager.service on all master nodes.

This document uses 196.16.0.0/16 as the Pod CIDR; it must not overlap with the host network or the k8s Service CIDR, so adjust it as needed. Again, Docker's default bridge is 172.17.0.1/16 -- do not use that range.

# Run on all master nodes
vi /usr/lib/systemd/system/kube-controller-manager.service
## --cluster-cidr=196.16.0.0/16 : the Pod CIDR; change it to the range you planned

[Unit]
Description=Kubernetes Controller Manager
Documentation=
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --v=2 \
  --logtostderr=true \
  --address=127.0.0.1 \
  --root-ca-file=/etc/kubernetes/pki/ca.pem \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
  --kubeconfig=/etc/kubernetes/controller-manager.conf \
  --leader-elect=true \
  --use-service-account-credentials=true \
  --node-monitor-grace-period=40s \
  --node-monitor-period=5s \
  --pod-eviction-timeout=2m0s \
  --controllers=*,bootstrapsigner,tokencleaner \
  --allocate-node-cidrs=true \
  --cluster-cidr=196.16.0.0/16 \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
  --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

3.2. Start it

# Run on all master nodes
systemctl daemon-reload && systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager

4. Configure the scheduler

4.1. Configuration

Configure kube-scheduler.service on all master nodes.

vi /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --v=2 \
  --logtostderr=true \
  --address=127.0.0.1 \
  --leader-elect=true \
  --kubeconfig=/etc/kubernetes/scheduler.conf
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

4.2. Start it

systemctl daemon-reload && systemctl enable --now kube-scheduler
systemctl status kube-scheduler

VIII. TLS bootstrapping

1. Configure bootstrap on master1

Note: for a non-HA cluster, replace 192.168.0.250:6443 with master1's address; 6443 is the default apiserver port.

# Generate a random token (we only need 16 characters)
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# e.g.: 737b177d9823531a433e368fcdb16f5f
# Generate a 16-character token secret
head -c 8 /dev/urandom | od -An -t x | tr -d ' '
# e.g.: d683399b7a553977
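For reference, a bootstrap token always has the form <token-id>.<token-secret>: a 6-character id plus a 16-character secret, both limited to [a-z0-9]. The value l6fy8c.d683399b7a553977 used below follows that pattern. A small sketch of producing both parts (my own variant of the commands above, using od -t x1):

TOKEN_ID=$(head -c 3 /dev/urandom | od -An -t x1 | tr -d ' ')       # 3 bytes -> 6 hex characters
TOKEN_SECRET=$(head -c 8 /dev/urandom | od -An -t x1 | tr -d ' ')   # 8 bytes -> 16 hex characters
echo "${TOKEN_ID}.${TOKEN_SECRET}"                                  # e.g. a1b2c3.d683399b7a553977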

# Create the cluster entry
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=:6443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
# Create the credentials (the bootstrap token)
kubectl config set-credentials tls-bootstrap-token-user --token=l6fy8c.d683399b7a553977 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
# Create the context
kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
# Use the context
kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

2. Give master1 kubectl access

Whether kubectl can talk to the cluster depends on whether /root/.kube/config exists; that config is the admin.conf generated earlier, which carries full administrative permissions.

# Only on master1: in production, only one machine should be able to administer the cluster, so access stays under control
mkdir -p /root/.kube
cp /etc/kubernetes/admin.conf /root/.kube/config

# Verify
kubectl get nodes
# The load balancer's port 6443 must be reachable from here
[root@k8s-master1 ~]# kubectl get nodes
No resources found
# "No resources found" means we can already reach the apiserver and query resources

3. Create the cluster bootstrap permissions

# Prepare this file on the master
vi /etc/kubernetes/bootstrap.secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-l6fy8c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: l6fy8c
  token-secret: d683399b7a553977
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver

# Apply the resources in this file
kubectl create -f /etc/kubernetes/bootstrap.secret.yaml

IX. Bootstrap the worker nodes

kubelet on every node is started through this bootstrap process.

1. Send the core certificates to the nodes

master1 sends the core certificates to the other nodes.

cd /etc/kubernetes/
# Send the certificates and the bootstrap kubeconfig to every other node
for NODE in k8s-master2 k8s-master3 k8s-node1 k8s-node2; do
  ssh $NODE mkdir -p /etc/kubernetes/pki/etcd
  for FILE in ca.pem etcd.pem etcd-key.pem; do
    scp /etc/kubernetes/pki/etcd/$FILE $NODE:/etc/kubernetes/pki/etcd/
  done
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.conf; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done

2. Configure kubelet on all nodes

# Create the required directories on all nodes
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
## Every worker node must have kubelet and kube-proxy
for NODE in k8s-master2 k8s-master3 k8s-node3 k8s-node1 k8s-node2; do
  scp -r /etc/kubernetes/* root@$NODE:/etc/kubernetes/
done

2.1. Create kubelet.service

# On all nodes, create the kubelet service
vi /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

# On all nodes, create the kubelet drop-in configuration
vi /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause:3.4.1"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

2.2. Create the kubelet-conf.yml file

# On all nodes, create the kubelet configuration file
vi /etc/kubernetes/kubelet-conf.yml
# clusterDNS is the 10th IP of the Service network -- change it to yours, e.g. 10.96.0.10

apiVersion: kubelet.config.k8s.io/v1beta1kind: KubeletConfigurationaddress: 0.0.0.0port: 10250readOnlyPort: 10255authentication: anonymous:enabled: false webhook:cacheTTL: 2m0senabled: true x509:clientCAFile: /etc/kubernetes/pki/ca.pemauthorization: mode: Webhook webhook:cacheAuthorizedTTL: 5m0scacheUnauthorizedTTL: 30scgroupDriver: systemdcgroupsPerQOS: trueclusterDNS:- 10.96.0.10clusterDomain: cluster.localcontainerLogMaxFiles: 5containerLogMaxSize: 10MicontentType: application/vnd.kubernetes.protobufcpuCFSQuota: truecpuManagerPolicy: nonecpuManagerReconcilePeriod: 10senableControllerAttachDetach: trueenableDebuggingHandlers: trueenforceNodeAllocatable:- podseventBurst: 10eventRecordQPS: 5evictionHard: imagefs.available: 15% memory.available: 100Mi nodefs.available: 10% nodefs.inodesFree: 5%evictionPressureTransitionPeriod: 5m0s #缩小相应的配置failSwapOn: truefileCheckFrequency: 20shairpinMode: promiscuous-bridgehealthzBindAddress: 127.0.0.1healthzPort: 10248httpCheckFrequency: 20simageGCHighThresholdPercent: 85imageGCLowThresholdPercent: 80imageMinimumGCAge: 2m0siptablesDropBit: 15iptablesMasqueradeBit: 14kubeAPIBurst: 10kubeAPIQPS: 5makeIPTablesUtilChains: truemaxOpenFiles: 1000000maxPods: 110nodeStatusUpdateFrequency: 10soomScoreAdj: -999podPidsLimit: -1registryBurst: 10registryPullQPS: 5resolvConf: /etc/resolv.confrotateCertificates: trueruntimeRequestTimeout: 2m0sserializeImagePulls: truestaticPodPath: /etc/kubernetes/manifestsstreamingConnectionIdleTimeout: 4h0m0ssyncFrequency: 1m0svolumeStatsAggPeriod: 1m0s

2.3. Start kubelet on all nodes

systemctl daemon-reload && systemctl enable --now kubelet
systemctl status kubelet

kubelet will log "Unable to update cni config" until the CNI network is configured in the next steps.
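Once kubelet is running with the bootstrap kubeconfig, it should request and be issued a certificate automatically, thanks to the ClusterRoleBindings created above; a quick check on master1 (my own addition):

# CSRs should appear and move to Approved,Issued on their own
kubectl get csr
# Nodes register, but stay NotReady until the CNI plugin (calico, below) is deployed
kubectl get nodes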

3. Configure kube-proxy

Note: for a non-HA cluster, replace 192.168.0.250:6443 with master1's address; 6443 is the default apiserver port.

3.1. Generate kube-proxy.conf

Run the following on master1.

# Create the kube-proxy ServiceAccount
kubectl -n kube-system create serviceaccount kube-proxy
# Bind it to the node-proxier cluster role
kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
# Export some variables for later use
SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
# Generate the kube-proxy kubeconfig
# --server: your apiserver or load balancer address
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=:6443 --kubeconfig=${K8S_DIR}/kube-proxy.conf
# Set the kube-proxy credentials
kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.conf
kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.conf
kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.conf

# Send the generated kube-proxy.conf to every node
for NODE in k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3; do
  scp /etc/kubernetes/kube-proxy.conf $NODE:/etc/kubernetes/
done

3.2. Configure kube-proxy.service

# On all nodes, create the kube-proxy service (it is enabled at boot in a moment)
vi /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube Proxy
Documentation=
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.yaml --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

3.3. Prepare kube-proxy.yaml

Be sure to set your own Pod CIDR in it.

# Run on all machines
vi /etc/kubernetes/kube-proxy.yaml

apiVersion: kubeproxy.config.k8s.io/v1alpha1bindAddress: 0.0.0.0clientConnection: acceptContentTypes: "" burst: 10 contentType: application/vnd.kubernetes.protobuf kubeconfig: /etc/kubernetes/kube-proxy.conf#kube-proxy引导文件 qps: 5clusterCIDR: 196.16.0.0/16 #修改为自己的Pod-CIDRconfigSyncPeriod: 15m0sconntrack: max: null maxPerCore: 32768 min: 131072 tcpCloseWaitTimeout: 1h0m0s tcpEstablishedTimeout: 24h0m0senableProfiling: falsehealthzBindAddress: 0.0.0.0:10256hostnameOverride: ""iptables: masqueradeAll: false masqueradeBit: 14 minSyncPeriod: 0s syncPeriod: 30sipvs: masqueradeAll: true minSyncPeriod: 5s scheduler: "rr" syncPeriod: 30skind: KubeProxyConfigurationmetricsBindAddress: 127.0.0.1:10249mode: "ipvs"nodePortAddresses: nulloomScoreAdj: -999portRange: ""udpIdleTimeout: 250ms

3.4. Start kube-proxy

Start it on all nodes.

systemctl daemon-reload && systemctl enable --now kube-proxy
systemctl status kube-proxy

X. Deploy calico

You can follow calico's private-cloud deployment guide.

# Download the official calico manifest
curl -o calico.yaml <calico etcd-datastore manifest URL>
## Optionally switch its images to a domestic mirror
# Point calico at our etcd cluster
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: ":2379,:2379,:2379"#g' calico.yaml
# The etcd certificates must be base64-encoded into the manifest
ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.pem | base64 -w 0 `
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 -w 0 `
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 -w 0 `
# Substitute the encoded certificate contents
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico.yaml
# Enable the etcd_ca / etcd_cert / etcd_key settings (calico mounts them under /calico-secrets at runtime)
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico.yaml
# Set our Pod CIDR, 196.16.0.0/16
POD_SUBNET="196.16.0.0/16"
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico.yaml
# Make sure the change really landed
grep "CALICO_IPV4POOL_CIDR" calico.yaml -A 1

# Apply the calico manifest
kubectl apply -f calico.yaml

XI. Deploy CoreDNS

# Clone the coredns deployment repository and enter deployment/kubernetes
git clone <coredns/deployment repository URL>
cd deployment/kubernetes
# 10.96.0.10 is the 10th IP of the Service network; change it to yours
./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -

XII. Label the machines with roles

kubectl label node k8s-master1 node-role.kubernetes.io/master=''
kubectl label node k8s-master2 node-role.kubernetes.io/master=''
kubectl label node k8s-master3 node-role.kubernetes.io/master=''
kubectl taint node k8s-master1   # (the full taint command is shown in the verification section below)

XIII. Verify the cluster

Verify Pod network reachability:

- Pods can reach each other by IP, both within a namespace and across namespaces.

- Pods scheduled on different machines can also reach each other.

Verify Service network reachability:

- Cluster machines can reach a Service IP and get load-balanced responses.

- Pods can resolve Service DNS names of the form serviceName.namespace.

- Pods can reach Services in other namespaces.

# 部署以下内容进行测试apiVersion: apps/v1kind: Deploymentmetadata: name: nginx-01 namespace: default labels:app: nginx-01spec: selector:matchLabels:app: nginx-01 replicas: 1 template:metadata:labels:app: nginx-01spec:containers:- name: nginx-01image: nginx---apiVersion: v1kind: Servicemetadata: name: nginx-svc namespace: defaultspec: selector:app: nginx-01 type: ClusterIP ports: - name: nginx-svcport: 80targetPort: 80protocol: TCP---apiVersion: v1kind: Namespacemetadata: name: hellospec: {}---apiVersion: apps/v1kind: Deploymentmetadata: name: nginx-hello namespace: hello labels:app: nginx-hellospec: selector:matchLabels:app: nginx-hello replicas: 1 template:metadata:labels:app: nginx-hellospec:containers:- name: nginx-helloimage: nginx---apiVersion: v1kind: Servicemetadata: name: nginx-svc-hello namespace: hellospec: selector:app: nginx-hello type: ClusterIP ports: - name: nginx-svc-helloport: 80targetPort: 80protocol: TCP
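The checks in the list above can be run against these manifests roughly as follows (my own sketch; the busybox image is an assumption, the pod and service names come from the manifests):

# Service and pod IPs
kubectl get svc,pods -o wide -A | grep -E 'nginx|hello'
# Load-balanced access to the ClusterIP from any cluster machine (use the CLUSTER-IP printed above)
curl -s <nginx-svc CLUSTER-IP> | head -n 4
# Cross-namespace Service DNS from inside a pod
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup nginx-svc-hello.hello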

# Mark the worker machines with the worker role
kubectl label node k8s-node3 node-role.kubernetes.io/worker=''
kubectl label node k8s-master3 node-role.kubernetes.io/worker=''
kubectl label node k8s-node1 node-role.kubernetes.io/worker=''
kubectl label node k8s-node2 node-role.kubernetes.io/worker=''
# Taint master1. A binary-installed cluster has no master taints by default, so Pods can be scheduled anywhere;
# it is best to taint at least one master to keep a minimal control plane available
kubectl label node k8s-master3 node-role.kubernetes.io/master=''
kubectl taint nodes k8s-master1 node-role.kubernetes.io/master=:NoSchedule

学习Kubernetes的关键一步就是要学会搭建一套k8s集群。在今天的文章中作者将最近新总结的搭建技巧,无偿分享给大家!废话不多说,直接上干货!

01、系统环境准备

要安装部署Kubernetes集群,首先需要准备机器,最直接的办法可以到公有云(如阿里云等)申请几台虚拟机。而如果条件允许,拿几台本地物理服务器来组建集群自然是最好不过了。但是这些机器需要满足以下几个条件:

要求64位Linux操作系统,且内核版本要求3.10及以上,能满足安装Docker项目所需的要求;机器之间要保持网络互通,这是未来容器之间网络互通的前提条件;要有外网访问权限,因为部署的过程中需要拉取相应的镜像,要求能够访问到gcr.io、quay.io这两个docker registry,因为有小部分镜像需要从这里拉取;单机可用资源建议2核CPU、8G内存或以上,如果小一点也可以但是能调度的Pod数量就比较有限了;磁盘空间要求在30GB以上,主要用于存储Docker镜像及相关日志文件;

在本次实验中我们准备了两台虚拟机,其具体配置如下:

2-core CPU, 2 GB RAM and 30 GB of disk space; Ubuntu 20.04 LTS Server with Linux kernel 5.4.0; internal network connectivity between the machines and unrestricted outbound internet access.

02. An introduction to kubeadm, the Kubernetes cluster deployment tool

作为典型的分布式系统,Kubernetes的部署一直是困扰初学者进入Kubernetes世界的一大障碍。在发布早期Kubernetes的部署主要依赖于社区维护的各种脚本,但这其中会涉及二进制编译、配置文件以及kube-apiserver授权配置文件等诸多运维工作。目前各大云服务厂商常用的Kubernetes部署方式是使用SaltStack、Ansible等运维工具自动化地执行这些繁琐的步骤,但即使这样,这个部署的过程对于初学者来说依然是非常繁琐的。

正是基于这样的痛点,在志愿者的推动下Kubernetes社区终于发起了kubeadm这一独立的一键部署工具,使用kubeadm我们可以通过几条简单的指令来快速地部署一个kubernetes集群。在接下来的内容中,就将具体演示如何使用kubeadm来部署一个简单结构的Kubernetes集群。

03、安装kubeadm及Docker环境


前面简单介绍了Kubernetes官方发布一键部署工具kubeadm,只需要添加kubeadm的源,然后直接用yum安装即可,具体操作如下:

1)、编辑操作系统安装源配置文件,添加kubernetes镜像源,命令如下:

# Add the Docker Aliyun mirror repository
[root@centos-linux ~]# wget -O /etc/yum.repos.d/docker-ce.repo <docker-ce repo URL>
# Install Docker
[root@centos-linux ~]# yum -y install docker-ce-18.09.9-3.el7
# Start Docker and enable it at boot
[root@centos-linux ~]# systemctl enable docker

Add the Kubernetes yum repository. Because of network restrictions, a domestic mirror such as the Aliyun repository can be used instead:

# Add the Aliyun Kubernetes yum repository
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=
EOF

2) With the above in place, kubeadm can be installed with yum:

[root@centos-linux ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
# The latest release at the time of writing is 1.21; we install 1.20 here.
# Check the installed client version
[root@centos-linux ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

During this installation, the binaries of the core components -- kubeadm, kubelet, kubectl and kubernetes-cni -- are all installed automatically.

3)、Docker服务启动及限制修改

在具体运行kubernetes部署之前需要对Docker的配置信息进行一些调整。首先,编辑系统/etc/default/grub文件,在配置项GRUB_CMDLINE_LINUX中添加如下参数:

GRUB_CMDLINE_LINUX=" cgroup_enable=memory swapaccount=1"

完成编辑后保存执行如下命令,并重启服务器,命令如下:

root@kubernetesnode01:/opt/kubernetes-config# reboot

上述修改主要解决的是可能出现的“docker警告WARNING: No swap limit support”问题。其次,编辑创建/etc/docker/daemon.json文件,添加如下内容:

# cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [""],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

After saving, restart Docker:

# systemctl restart docker

Then Docker's cgroup information can be checked:

# docker info | grep Cgroup
Cgroup Driver: systemd

上述修改主要解决的是“Docker cgroup driver. The recommended driver is "systemd"”的问题。需要强调的是以上修改只是作者在具体安装操作是遇到的具体问题的解决整理,如在实践过程中遇到其他问题还需要自行查阅相关资料! 最后,需要注意由于kubernetes禁用虚拟内存,所以要先关闭掉swap否则就会在kubeadm初始化kubernetes的时候报错,具体如下:

# swapoff -a

That command only disables swap temporarily; to keep it disabled across reboots, edit /etc/fstab ("vim /etc/fstab") and comment out the swap line.
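If you prefer not to edit the file by hand, the swap entry can be commented out with sed, the same trick used in the binary-install walkthrough earlier (a small convenience sketch):

swapoff -a
# Comment out every swap entry so the setting survives a reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab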

04、部署Kubernetes的Master节点

在Kubernetes中Master节点是集群的控制节点,它是由三个紧密协作的独立组件组合而成,分别是负责API服务的kube-apiserver、负责调度的kube-scheduler以及负责容器编排的kube-controller-manager,其中整个集群的持久化数据由kube-apiserver处理后保存在Etcd中。 要部署Master节点可以直接通过kubeadm进行一键部署,但这里我们希望能够部署一个相对完整的Kubernetes集群,可以通过配置文件来开启一些实验性的功能。具体在系统中新建/opt/kubernetes-config/目录,并创建一个给kubeadm用的YAML文件(kubeadm.yaml),具体内容如下:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    horizontal-pod-autoscaler-use-rest-clients: "true"
    horizontal-pod-autoscaler-sync-period: "10s"
    node-monitor-grace-period: "10s"
apiServer:
  extraArgs:
    runtime-config: "api/all=true"
kubernetesVersion: "v1.20.0"

在上述yaml配置文件中“horizontal-pod-autoscaler-use-rest-clients: "true"”这个配置,表示将来部署的kuber-controller-manager能够使用自定义资源(Custom Metrics)进行自动水平扩展,感兴趣的读者可以自行查阅相关资料!而“v1.20.0”就是要kubeadm帮我们部署的Kubernetes版本号。

需要注意的是,如果执行过程中由于国内网络限制问题导致无法下载相应的Docker镜像,可以根据报错信息在国内网站(如阿里云)上找到相关镜像,然后再将这些镜像重新tag之后再进行安装。具体如下:

# Pull the Kubernetes component images from the Aliyun Docker registry
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

下载完成后再将这些Docker镜像重新tag下,具体命令如下:

#重新tag镜像docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0此时通过Docker命令就可以查看到这些Docker镜像信息了,命令如下:root@kubernetesnode01:/opt/kubernetes-config# docker imagesREPOSITORYTAGIMAGE IDCREATEDSIZEk8s.gcr.io/kube-proxyv1.18.14e68534e24f62 months ago117MBregistry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64v1.18.14e68534e24f62 months ago117MBk8s.gcr.io/kube-controller-managerv1.18.1d1ccdd18e6ed2 months ago162MBregistry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64v1.18.1d1ccdd18e6ed2 months ago162MBk8s.gcr.io/kube-apiserverv1.18.1a595af0107f92 months ago173MBregistry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64v1.18.1a595af0107f92 months ago173MBk8s.gcr.io/kube-schedulerv1.18.16c9320041a7b2 months ago95.3MBregistry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64v1.18.16c9320041a7b2 months ago95.3MBk8s.gcr.io/pause3.280d28bedfe5d4 months ago683kBregistry.cn-hangzhou.aliyuncs.com/google_containers/pause3.280d28bedfe5d4 months ago683kBk8s.gcr.io/coredns1.6.767da37a9a3604 months ago43.8MBregistry.cn-hangzhou.aliyuncs.com/google_containers/coredns1.6.767da37a9a3604 months ago43.8MBk8s.gcr.io/etcd3.4.3-0303ce5db0e908 months ago288MBregistry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd643.4.3-0303ce5db0e908 months ago288MB

解决镜像拉取问题后再次执行kubeadm部署命令就可以完成Kubernetes Master控制节点的部署了,具体命令及执行结果如下:

root@kubernetesnode01:/opt/kubernetes-config# kubeadm init --config kubeadm.yaml --v=5
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: ...
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.211.55.13:6443 --token yi9lua.icl2umh9yifn6z9k --discovery-token-ca-cert-hash sha256:074460292aa167de2ae9785f912001776b936cec79af68cec597bd4a06d5998d

从上面部署执行结果中可以看到,部署成功后kubeadm会生成如下指令:

kubeadm join 10.211.55.13:6443 --token yi9lua.icl2umh9yifn6z9k --discovery-token-ca-cert-hash sha256:074460292aa167de2ae9785f912001776b936cec79af68cec597bd4a06d5998d

这个kubeadm join命令就是用来给该Master节点添加更多Worker(工作节点)的命令,后面具体部署Worker节点的时候将会使用到它。此外,kubeadm还会提示我们第一次使用Kubernetes集群所需要配置的命令:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

These commands are needed because access to a Kubernetes cluster is encrypted and authenticated by default: they copy the security configuration generated during deployment into the current user's .kube directory, and kubectl then uses the credentials in that directory by default when talking to the cluster. Without them, you would have to set the KUBECONFIG environment variable in every session to tell kubectl where the credentials file is.
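In other words, either copy the file once as above, or export the variable per session; for example:

# One-off per shell session, instead of copying admin.conf into ~/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes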

With that done, kubectl get can be used to check the state of the cluster nodes:

# kubectl get nodes
NAME                  STATUS     ROLES                  AGE     VERSION
centos-linux.shared   NotReady   control-plane,master   6m55s   v1.20.0

在以上命令输出的结果中可以看到Master节点的状态为“NotReady”,为了查找具体原因可以通过“kuberctl describe”命令来查看下该节点(Node)对象的详细信息,命令如下:

# kubectl describe node centos-linux.shared

该命令可以非常详细地获取节点对象的状态、事件等详情,这种方式也是调试Kubernetes集群时最重要的排查手段。根据显示的如下信息:

...
Conditions:
...
Ready   False   ...   KubeletNotReady   runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
...

可以看到节点处于“NodeNotReady”的原因在于尚未部署任何网络插件,为了进一步验证这一点还可以通过kubectl检查这个节点上各个Kubernetes系统Pod的状态,命令及执行效果如下:

# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-l4wt6                   0/1     Pending   0          64m
coredns-66bff467f8-rcqx6                   0/1     Pending   0          64m
etcd-kubernetesnode01                      1/1     Running   0          64m
kube-apiserver-kubernetesnode01            1/1     Running   0          64m
kube-controller-manager-kubernetesnode01   1/1     Running   0          64m
kube-proxy-wjct7                           1/1     Running   0          64m
kube-scheduler-kubernetesnode01            1/1     Running   0          64m

命令中“kube-system”表示的是Kubernetes项目预留的系统Pod空间(Namespace),需要注意它并不是Linux Namespace,而是Kuebernetes划分的不同工作空间单位。回到命令输出结果,可以看到coredns等依赖于网络的Pod都处于Pending(调度失败)的状态,这样说明了该Master节点的网络尚未部署就绪。

05、部署Kubernetes网络插件

前面部署Master节点中由于没有部署网络插件,所以节点状态显示“NodeNotReady”状态。接下来的内容我们就来具体部署下网络插件。在Kubernetes“一切皆容器”的设计理念指导下,网络插件也会以独立Pod的方式运行在系统中,所以部署起来也很简单只需要执行“kubectl apply”指令即可,例如以Weave网络插件为例:

# kubectl apply -f <weave-net manifest URL>?k8s-version=$(kubectl version | base64 | tr -d ' ')
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created

部署完成后通过“kubectl get”命令重新检查Pod的状态:

# kubectl get pods -n kube-systemNAMEREADYSTATUSRESTARTSAGEcoredns-66bff467f8-l4wt61/1Running0116mcoredns-66bff467f8-rcqx61/1Running0116metcd-kubernetesnode011/1Running0116mkube-apiserver-kubernetesnode011/1Running0116mkube-controller-manager-kubernetesnode011/1Running0116mkube-proxy-wjct71/1Running0116mkube-scheduler-kubernetesnode011/1Running0116mweave-net-746qj2/2Running014m

可以看到,此时所有的系统Pod都成功启动了,而刚才部署的Weave网络插件则在kube-system下面新建了一个名叫“weave-net-746qj”的Pod,而这个Pod就是容器网络插件在每个节点上的控制组件。

到这里,Kubernetes的Master节点就部署完成了,如果你只需要一个单节点的Kubernetes,那么现在就可以使用了。但是在默认情况下,Kubernetes的Master节点是不能运行用户Pod的,需要通过额外的操作进行调整,在本文的最后将会介绍到它。

06、部署Worker节点

为了构建一个完整的Kubernetes集群,这里还需要继续介绍如何部署Worker节点。实际上Kubernetes的Worker节点和Master节点几乎是相同的,它们都运行着一个kubelet组件,主要的区别在于“kubeadm init”的过程中,kubelet启动后,Master节点还会自动启动kube-apiserver、kube-scheduler及kube-controller-manager这三个系统Pod。

As with the Master node, before deploying you must first run all of the steps from the earlier section "Install kubeadm and the Docker environment" on every Worker node. Then execute on the Worker node the "kubeadm join" command that was generated when the Master node was deployed:

root@kubenetesnode02:~# kubeadm join 10.211.55.6:6443 --token jfulwi.so2rj5lukgsej2o6 --discovery-token-ca-cert-hash sha256:d895d512f0df6cb7f010204193a9b240e8a394606090608daee11b988fc7fea6 --v=5
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

完成集群加入后为了便于在Worker节点执行kubectl相关命令,需要进行如下配置:

# Create the config directory
root@kubenetesnode02:~# mkdir -p $HOME/.kube
# Copy the config file from $HOME/.kube/ on the Master node into the same directory on the Worker
root@kubenetesnode02:~# scp root@10.211.55.6:$HOME/.kube/config $HOME/.kube/
# Fix the permissions
root@kubenetesnode02:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Afterwards, "kubectl get nodes" can be run on either the Worker or the Master node:

root@kubernetesnode02:~# kubectl get nodes
NAME               STATUS     ROLES    AGE   VERSION
kubenetesnode02    NotReady   <none>   33m   v1.18.4
kubernetesnode01   Ready      master   29h   v1.18.4

通过节点状态显示此时Work节点还处于NotReady状态,具体查看节点描述信息如下:

root@kubernetesnode02:~# kubectl describe node kubenetesnode02...Conditions:...Ready False ... KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized...

根据描述信息,发现Worker节点NotReady的原因也在于网络插件没有部署,继续执行“部署Kubernetes网络插件”小节中的步骤即可。但是要注意部署网络插件时会同时部署kube-proxy,其中会涉及从k8s.gcr.io仓库获取镜像的动作,如果无法访问外网可能会导致网络部署异常,这里可以参考前面安装Master节点时的做法,通过国内镜像仓库下载后通过tag的方式进行标记,具体如下:

#从阿里云拉取必要镜像docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.20.0docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2#将镜像重新打tagdocker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2如若一切正常,则继续查看节点状态,命令如下:root@kubenetesnode02:~# kubectl get nodeNAMESTATUSROLESAGEVERSIONkubenetesnode02Ready<none>7h52mv1.20.0kubernetesnode01Readymaster37hv1.20.0

可以看到此时Worker节点的状态已经变成“Ready”,不过细心的读者可能会发现Worker节点的ROLES并不像Master节点那样显示“master”而是显示了,这是因为新安装的Kubernetes环境Node节点有时候会丢失ROLES信息,遇到这种情况可以手工进行添加,具体命令如下:

root@kubenetesnode02:~# kubectl label node kubenetesnode02 node-role.kubernetes.io/worker=worker

再次运行节点状态命令就能看到正常的显示了,命令效果如下:

root@kubenetesnode02:~# kubectl get nodeNAMESTATUSROLESAGEVERSIONkubenetesnode02Readyworker8hv1.18.4kubernetesnode01Readymaster37hv1.18.4

到这里就部署完成了具有一个Master节点和一个Worker节点的Kubernetes集群了,作为实验环境它已经具备了基本的Kubernetes集群功能!

07、部署Dashboard可视化插件

在Kubernetes社区中,有一个很受欢迎的Dashboard项目,它可以给用户一个可视化的Web界面来查看当前集群中的各种信息。该插件也是以容器化方式进行部署,操作也非常简单,具体可在Master、Worker节点或其他能够安全访问Kubernetes集群的Node上进行部署,命令如下:

root@kubenetesnode02:~# kubectl apply -f

部署完成后就可以查看Dashboard对应的Pod运行状态,执行效果如下:

root@kubenetesnode02:~# kubectl get pods -n kubernetes-dashboardNAMEREADYSTATUSRESTARTSAGEdashboard-metrics-scraper-6b4884c9d5-xfb8b1/1Running012hkubernetes-dashboard-7f99b75bf4-9lxk81/1Running012h

除此之外还可以查看Dashboard的服务(Service)信息,命令如下:

root@kubenetesnode02:~# kubectl get svc -n kubernetes-dashboardNAMETYPECLUSTER-IPEXTERNAL-IPPORT(S)AGEdashboard-metrics-scraperClusterIP10.97.69.158<none>8000/TCP13hkubernetes-dashboardClusterIP10.111.30.214<none>443/TCP13h

需要注意的是,由于Dashboard是一个Web服务,从安全角度出发Dashboard默认只能通过Proxy的方式在本地访问。具体方式为在本地机器安装kubectl管理工具,并将Master节点$HOME/.kube/目录中的config文件拷贝至本地主机相同目录,之后运行“kubectl proxy”命令,如下:

qiaodeMacBook-Pro-2:.kube qiaojiang$ kubectl proxyStarting to serve on 127.0.0.1:8001

本地proxy代理启动后,访问Kubernetes Dashboard地址,具体如下:

:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

如果访问正常,就会看到如下图所示界面:

如上图所示Dashboard访问需要进行身份认证,主要有Token及Kubeconfig两种方式,这里我们选择Token的方式,而Token的生成步骤如下:

1)、创建一个服务账号

首先在命名空间kubernetes-dashboard中创建名为admin-user的服务账户,具体步骤为在本地目录创建类似“dashboard-adminuser.yaml”文件,具体内容如下:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

编写文件后具体执行创建命令:

qiaodeMacBook-Pro-2:.kube qiaojiang$ kubectl apply -f dashboard-adminuser.yamlWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl applyserviceaccount/admin-user configured

2)、创建ClusterRoleBinding

在使用kubeadm工具配置完Kubernetes集群后,集群中已经存在ClusterRole集群管理,可以使用它为上一步创建的ServiceAccount创建ClusterRoleBinding。具体步骤为在本地目录创建类似“dashboard-clusterRoleBingding.yaml”的文件,具体内容如下:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

执行创建命令:

qiaodeMacBook-Pro-2:.kube qiaojiang$ kubectl apply -f dashboard-clusterRoleBingding.yamlclusterrolebinding.rbac.authorization.k8s.io/admin-user created

3)、获取Bearer Token

接下来执行获取Bearer Token的命令,具体如下:

qiaodeMacBook-Pro-2:.kube qiaojiang$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')Name:admin-user-token-xxq2bNamespace:kubernetes-dashboardLabels:<none>Annotations: kubernetes.io/service-account.name: admin-userkubernetes.io/service-account.uid: 213dce75-4063-4555-842a-904cf4e88ed1Type: kubernetes.io/service-account-tokenData====ca.crt:1025 bytesnamespace: 20 bytestoken:eyJhbGciOiJSUzI1NiIsImtpZCI6IlplSHRwcXhNREs0SUJPcTZIYU1kT0pidlFuOFJaVXYzLWx0c1BOZzZZY28ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXh4cTJiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTNkY2U3NS00MDYzLTQ1NTUtODQyYS05MDRjZjRlODhlZDEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.MIjSewAk4aVgVCU6fnBBLtIH7PJzcDUozaUoVGJPUu-TZSbRZHotugvrvd8Ek_f5urfyYhj14y1BSe1EXw3nINmo4J7bMI94T_f4HvSFW1RUznfWZ_uq24qKjNgqy4HrSfmickav2PmGv4TtumjhbziMreQ3jfmaPZvPqOa6Xmv1uhytLw3G6m5tRS97kl0i8A1lqnOWu7COJX0TtPkDrXiPPX9IzaGrp3Hd0pKHWrI_-orxsI5mmFj0cQZt1ncHarCssVnyHkWQqtle4ljV2HAO-bgY1j0E1pOPTlzpmSSbmAmedXZym77N10YNaIqtWvFjxMzhFqeTPNo539V1Gg

获取Token后回到前面的认证方式选择界面,将获取的Token信息填入就可以正式进入Dashboard的系统界面,看到Kubernetes集群的详细可视化信息了,如图所示:

到这里就完成了Kubernetes可视化插件的部署并通过本地Proxy的方式进行了登录。在实际的生产环境中如果觉得每次通过本地Proxy的方式进行访问不够方便,也可以使用Ingress方式配置集群外访问Dashboard,感兴趣的读者可以自行尝试下。也可以先通过通过暴露端口,设置dashboard的访问,例如:

# Find the service name
# kubectl get svc -n kubernetes-dashboard
# kubectl edit services -n kubernetes-dashboard kubernetes-dashboard

Then change the Service definition as follows:

ports:
- nodePort: 30000
  port: 443
  protocol: TCP
  targetPort: 8443
selector:
  k8s-app: kubernetes-dashboard
sessionAffinity: None
type: NodePort

之后就可以通过IP+nodePort端口访问了!例如:

https://<node IP>:30000/

08. Adjusting the Master's Taint/Toleration policy

在前面我们提到过,Kubernetes集群的Master节点默认情况下是不能运行用户Pod的。而之所以能够达到这样的效果,Kubernetes依靠的正是Taint/Toleration机制;而该机制的原理是一旦某个节点被加上“Taint”就表示该节点被“打上了污点”,那么所有的Pod就不能再在这个节点上运行。

而Master节点之所以不能运行用户Pod,就在于其运行成功后会为自身节点打上“Taint”从而达到禁止其他用户Pod运行在Master节点上的效果(不影响已经运行的Pod),具体可以通过命令查看Master节点上的相关信息,命令执行效果如下:

root@kubenetesnode02:~# kubectl describe node kubernetesnode01Name:kubernetesnode01Roles:master...Taints:node-role.kubernetes.io/master:NoSchedule...

可以看到Master节点默认被加上了“node-role.kubernetes.io/master:NoSchedule”这样的“污点”,其中的值“NoSchedule”意味着这个Taint只会在调度新的Pod时产生作用,而不会影响在该节点上已经运行的Pod。如果在实验中只想要一个单节点的Kubernetes,那么可以在删除Master节点上的这个Taint,具体命令如下:

root@kubernetesnode01:~# kubectl taint nodes --all node-role.kubernetes.io/master-

上述命令通过在“nodes --all node-role.kubernetes.io/master”这个键后面加一个短横线“-”表示移除所有以该键为键的Taint。

到这一步,一个基本的Kubernetes集群就部署完成了,通过kubeadm这样的原生管理工具,Kubernetes的部署被大大简化了,其中像证书、授权及各个组件配置等最麻烦的操作,kubeadm都帮我们完成了。

09、Kubernetes集群重启命令

如果服务器断电,或者重启,可通过如下命令重启集群:

# Restart docker
systemctl daemon-reload
systemctl restart docker
# Restart kubelet
systemctl restart kubelet.service
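After the services come back up, it is worth confirming that the node and the system pods recover (a quick check of my own, assuming kubectl is configured as described above):

# The node should return to Ready once kubelet and the network plugin are running again
kubectl get nodes
# All kube-system pods should go back to Running
kubectl get pods -n kube-system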

以上就是在CentOS 7 系统环境下搭建一组Kubernetes学习集群的详细步骤,其它Linux发行版本的部署方法也类似,大家可以根据自己的需求选择!


The environment is as follows:

I. Analysis of the environment:

1. The two directors (load schedulers) and the two web nodes sit on the same network segment and can reach the internet directly. For the security of the shared storage, the web nodes and the storage server are normally placed on an internal network, so each web node must have at least two network interfaces.

2. My resources are limited and this keeps the configuration simple, so there are only two directors and two web nodes. That is enough when the volume of web requests is small, but under heavy traffic you should configure at least three of each: with only two web nodes, if one goes down, the survivor will be overwhelmed by the surge of requests.

3. Prepare the OS installation image so the required services can be installed.

4. Configure the firewall policy and all IP addresses except the VIP yourself (I simply disabled the firewall here).

5. keepalived loads the ip_vs module automatically, so there is no need to load it by hand.

II. Start building:

Configure the master director:

[root@lvs1 /]# yum -y install ipvsadm keepalived # 安装keepalived 和 ipvsadm管理工具[root@lvs1 keepalived]# vim /etc/sysctl.conf # 调整内核参数,关闭ICMP重定向...........net.ipv4.conf.all.send_redirects = 0net.ipv4.conf.default.send_redirects = 0net.ipv4.conf.ens33.send_redirects = 0[root@lvs1 /]# sysctl -p # 刷新使配置生效net.ipv4.conf.all.send_redirects = 0net.ipv4.conf.default.send_redirects = 0net.ipv4.conf.ens33.send_redirects = 0[root@lvs1 /]# cd /etc/keepalived/[root@lvs1 keepalived]# cp keepalived.conf keepalived.conf.bak # 复制一份keepalived 主配文件作为备份,以免修改时出错[root@lvs1 /]# vim /etc/keepalived/keepalived.conf # 编辑主配文件! Configuration File for keepalivedglobal_defs { notification_email { acassen@firewall.loc failover@firewall.loc # 当出错时,将报错信息发送到的收件人地址,可根据需要填写 sysadmin@firewall.loc } notification_email_from Alexandre.Cassen@firewall.loc #发件人姓名、地址(可不做修改) smtp_server 192.168.200.1 smtp_connect_timeout 30 router_id LVS1 #本服务器的名称改一下,在群集中所有调度器名称里必须唯一} vrrp_instance VI_1 { state MASTER # 设为主调度器 interface ens33 #承载VIP地址的物理网卡接口根据实际情况改一下 virtual_router_id 51 priority 100 # 主调度器的优先级 advert_int 1 authentication { # 主 从热备认证信息 auth_type PASS auth_pass 1111 } virtual_ipaddress { # 指定群集 VIP地址 200.0.0.100 } }virtual_server 200.0.0.100 80 { # 虚拟服务器地址(VIP) 端口 delay_loop 15 # 健康检查的间隔时间 lb_algo rr # 轮询调度算法 lb_kind DR # 指定工作模式,这里为DR,也可改为NAT ! persistence_timeout 50 #为了一会测试看到效果,将连接保持这行前加“ !”将该行注释掉 protocol TCP real_server 200.0.0.3 80 { # web节点的地址及端口 weight 1 TCP_CHECK { connect_port 80 connect_timeout 3 nb_get_retry 3 delay_before_retry 3 } } real_server 200.0.0.4 80 { # 另一 web节点地址及端口 weight 1 TCP_CHECK { connect_port 80 # 配置连接端口 connect_timeout 3 nb_get_retry 3 delay_before_retry 3 } }}[root@lvs1 /]# systemctl restart keepalived [root@lvs1 /]# systemctl enable keepalived

That completes the configuration of the master director.
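Before moving on, it can be confirmed on the master director that keepalived has pulled in IPVS and taken over the VIP (a quick check of my own; the interface name ens33 matches the configuration above):

lsmod | grep ip_vs        # the ip_vs modules loaded by keepalived
ip addr show dev ens33    # should list the VIP 200.0.0.100
ipvsadm -ln               # virtual server 200.0.0.100:80 with the two real servers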

Configure the backup director:

[root@localhost /]# yum -y install keepalived ipvsadm[root@localhost /]# scp root@200.0.0.1:/etc/sysctl.conf /etc/ # 可通过scp命令将配置较繁杂的复制过来root@200.0.0.1's password: sysctl.conf100% 566 0.6KB/s 00:00 [root@localhost /]# sysctl -p[root@localhost /]# sysctl -p # 刷新使配置生效net.ipv4.conf.all.send_redirects = 0net.ipv4.conf.default.send_redirects = 0net.ipv4.conf.ens33.send_redirects = 0[root@localhost /]# vim /etc/keepalived/keepalived.conf ......................router_id LVS2 # route-id 要不一样vrrp_instance VI_1 { state BACKUP # 状态改为 BACKUP 最好大写 interface ens33 # 网卡如果一样的话可不更改 virtual_router_id 51 priority 90 # 优先级要比主调度器小 advert_int 1 authentication { auth_type PASS auth_pass 1111 } virtual_ipaddress { # 就需要改这些其他配置均与主调度器相同 200.0.0.100 }}[root@localhost /]# systemctl enable keepalived[root@localhost /]# systemctl restart keepalived # 重启服务使配置生效

If you need more backup directors, configure each of them the same way as this backup director.

Configuration of the web1 node:

[root@web1 /]# cd /etc/sysconfig/network-scripts/[root@web1 network-scripts]# cp ifcfg-lo ifcfg-lo:0[root@web1 network-scripts]# vim ifcfg-lo:0DEVICE=lo:0IPADDR=200.0.0.100 # VIP 地址NETMASK=255.255.255.255 # 掩码为1ONBOOT=yes[root@web1 network-scripts]# ifup lo:0 # 启动虚接口[root@web1 network-scripts]# ifconfig lo:0 # 查看配置有无生效lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 200.0.0.100 netmask 255.255.255.255 loop txqueuelen 1 (Local Loopback)[root@web1 /]# route add -host 200.0.0.100 dev lo:0 # 添加本地路由[root@web1 /]# vim /etc/rc.local #设置开机自动,添加这条路由记录 ................................/sbin/route add -host 200.0.0.100 dev lo:0[root@web1 /]# vim /etc/sysctl.conf # 调整/proc参数,关闭 ARP响应net.ipv4.conf.all.arp_ignore = 1net.ipv4.conf.all.arp_announce = 2net.ipv4.conf.default.arp_ignore = 1net.ipv4.conf.default.arp_announce = 2net.ipv4.conf.lo.arp_ignore = 1net.ipv4.conf.lo.arp_announce = 2[root@web1 /]# sysctl -p # 刷新使配置生效net.ipv4.conf.all.arp_ignore = 1net.ipv4.conf.all.arp_announce = 2net.ipv4.conf.default.arp_ignore = 1net.ipv4.conf.default.arp_announce = 2net.ipv4.conf.lo.arp_ignore = 1net.ipv4.conf.lo.arp_announce = 2[root@web1 /]# yum -y install httpd[root@web1 /]# echo test1.com > /var/www/html/index.html[root@web1 /]# systemctl start httpd[root@web1 /]# systemctl enable httpd

The web2 node is configured the same way as web1, so I omit it here; the only difference is that, to make the verification easier to see, web2's test page says test2.com.

If you keep getting the same page even though the configuration is correct, open several browser windows or wait a moment before refreshing: a connection-persistence timeout may still be in effect, so there can be a delay before the pages alternate.
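A simple way to watch the round-robin scheduling from a client outside the cluster (the persistence_timeout line was commented out above precisely so this alternation is visible):

# Responses should alternate between test1.com and test2.com
for i in $(seq 1 6); do curl -s http://200.0.0.100/; sleep 1; done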

III. Set up the NFS shared storage service:

[root@nfs /]# mkdir /opt/wwwroot
[root@nfs /]# vim /etc/exports    # edit the exports file
/opt/wwwroot 192.168.1.0/24(rw,sync,no_root_squash)
[root@nfs /]# systemctl restart nfs    # restart to apply the configuration
[root@nfs /]# systemctl restart rpcbind
[root@nfs /]# showmount -e    # list the directories exported by this host
Export list for nfs:
/opt/wwwroot 192.168.1.0/24
[root@nfs /]# echo nfs.test.com > /opt/wwwroot/index.html

Mount the shared directory on all web nodes:

[root@web1 /]# showmount -e 192.168.1.5    # list everything exported by the storage server
Export list for 192.168.1.5:
/opt/wwwroot 192.168.1.0/24
[root@web1 /]# mount 192.168.1.5:/opt/wwwroot/ /var/www/html/    # mount it locally
[root@web1 /]# vim /etc/fstab    # make the mount persistent
.........................
192.168.1.5:/opt/wwwroot /var/www/html nfs defaults,_netdev 0 0

Both web1 and web2 must mount it.

1) The VIP lives on whichever director is active; query the physical interface that carries it to see the VIP (it will not show up on the backup director):

[root@LVS1 ~]# ip a show dev ens33 #查询承载VIP地址的物理网卡ens332: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> ate UP groupn 1000 link/ether 00:0c:29:77:2c:03 brd ff:ff:ff:ff:ff:ff inet 200.0.0.1/24 brd 200.0.0.255 scope global noprefixroute ens33 valid_lft forever preferred_lft forever inet 200.0.0.100/32 scope global ens33 #VIP地址。 valid_lft forever preferred_lft forever inet6 fe80::95f8:eeb7:2ed2:d13c/64 scope link noprefixroute valid_lft forever preferred_lft forever

2) List the web nodes in the pool:

[root@LVS1 ~]# ipvsadm -ln #查询web节点池及VIPIP Virtual Server version 1.2.1 (size=4096)Prot LocalAddress:Port Scheduler Flags RemoteAddress:Port Forward Weight ActiveConn InActConnTCP 200.0.0.100:80 rr 200.0.0.3:80 Route 1 0 0 200.0.0.4:80 Route 1 0 0

3) Simulate a failure of the web2 node and of the master director, then query the VIP and the web nodes again on the backup director:

[root@LVS2 ~]# ip a show dev ens33 #可以看到VIP地址已经转移到了备份调度器上2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> link/ether 00:0c:29:9a:09:98 brd ff:ff:ff:ff:ff:ff inet 200.0.0.2/24 brd 200.0.0.255 scope global noprefixroute ens33 valid_lft forever preferred_lft forever inet 200.0.0.100/32 scope global ens33 #VIP地址。 valid_lft forever preferred_lft forever inet6 fe80::3050:1a9b:5956:5297/64 scope link noprefixroute valid_lft forever preferred_lft forever[root@LVS2 ~]# ipvsadm -ln #Web2节点宕机后,就查不到了。IP Virtual Server version 1.2.1 (size=4096)Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConnTCP 200.0.0.100:80 rr -> 200.0.0.3:80 Route 1 0 0 #当主调度器或Web2节点恢复正常后,将会自动添加到群集中,并且正常运行。

4) Check the log messages during the director failover:

[root@LVS2 ~]# tail -30 /var/log/messages