Deploying a Kubernetes Cluster on CentOS 7.9 with kubeadm

This guide uses kubeadm to deploy a Kubernetes cluster with one master and two worker nodes.
1. Environment

- OS: CentOS 7.9.2009
- Memory: 2 GB
- CPU: 2 cores
- Network: nodes can reach each other and the internet

| hostname   | ip           | role   |
|------------|--------------|--------|
| k8s-master | 192.168.0.51 | master |
| k8s-node1  | 192.168.0.52 | worker |
| k8s-node2  | 192.168.0.53 | worker |
2. Preparation

Run the following steps on every node (master and workers).
2.1 Basic Linux configuration

```shell
# Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# Disable swap
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Set the timezone and sync the clock
timedatectl set-timezone Asia/Shanghai
yum -y install ntpdate
ntpdate time.windows.com
hwclock --systohc

# Let iptables see bridged traffic
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```
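If `sysctl --system` reports that the `net.bridge.*` keys are unknown, the `br_netfilter` kernel module is probably not loaded yet. A common fix (an extra step, not part of the original setup) is to load it now and persist it across reboots:

```shell
# Load the bridge netfilter module immediately
modprobe br_netfilter

# Load it automatically on every boot
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF

# The net.bridge.* keys should now be visible
sysctl net.bridge.bridge-nf-call-iptables
```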
2.2 Install Docker

```shell
# Add the repo mirror
curl https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo

# List available docker-ce versions
yum list docker-ce --showduplicates | sort -r

# Install 20.10
yum -y install docker-ce-20.10.6-3.el7
systemctl start docker
systemctl enable docker

# Switch to the Aliyun Docker registry mirror
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://wnsrsn9i.mirror.aliyuncs.com"]
}
EOF

# Restart for the change to take effect
systemctl restart docker
docker info
...
 Registry Mirrors:
  https://wnsrsn9i.mirror.aliyuncs.com/
...
```
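With this Docker version, `kubeadm init` typically warns that Docker is using the `cgroupfs` cgroup driver while `systemd` is recommended. Optionally (this is an extra tweak, not part of the original steps), the driver can be switched in the same `daemon.json`:

```shell
# Optional: switch Docker to the systemd cgroup driver recommended by kubeadm,
# keeping the registry mirror configured above
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://wnsrsn9i.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker

# Confirm the active cgroup driver
docker info | grep -i cgroup
```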
2.3 Install kubeadm, kubelet, and kubectl

```shell
# Add the repo mirror
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# List the supported versions
yum list kubelet --showduplicates | sort -r

# Install
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0

# Enable the kubelet service at boot
systemctl enable kubelet
```
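Since the cluster is pinned to v1.18.0, it may help to stop a routine `yum update` from upgrading these packages out from under the cluster. One way (an optional extra, assuming the versionlock plugin is available in the configured repos) is:

```shell
# Lock the kubelet/kubeadm/kubectl packages at their installed versions
yum install -y yum-plugin-versionlock
yum versionlock add kubelet kubeadm kubectl

# Show what is locked
yum versionlock list
```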
3. Deploy the Kubernetes cluster

Set the hostnames and hosts entries:
```shell
# Set the hostname (use k8s-node1 / k8s-node2 on the workers)
hostnamectl set-hostname k8s-master
hostname

# Configure hosts (on the master only)
cat >> /etc/hosts << EOF
192.168.0.51 k8s-master
192.168.0.52 k8s-node1
192.168.0.53 k8s-node2
EOF
```
Initialize the master:
```shell
# Run the init command; the apiserver address is the master's address
kubeadm init \
  --apiserver-advertise-address=192.168.0.51 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
...
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.51:6443 --token fxaizi.pb73yzhubpffc9zf \
    --discovery-token-ca-cert-hash sha256:95b842305c484ffcdcf3d5ccdeb5ada6ee89f418e77709138b491654e88c88ed
...

# After init succeeds, run the commands from the output above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# List the nodes; at this point they are still NotReady
kubectl get nodes
```
Initialize the workers:
```shell
kubeadm join 192.168.0.51:6443 --token fxaizi.pb73yzhubpffc9zf \
    --discovery-token-ca-cert-hash sha256:95b842305c484ffcdcf3d5ccdeb5ada6ee89f418e77709138b491654e88c88ed
```
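The bootstrap token printed by `kubeadm init` expires after 24 hours by default. If a worker joins later than that, a fresh join command can be generated on the master:

```shell
# Create a new token and print the full join command for workers
kubeadm token create --print-join-command
```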
Deploy the network plugin on the master:
```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-system  # check the status
```

Note that the manual manifest below deploys flannel into its own `kube-flannel` namespace, so in that case check with `kubectl get pods -n kube-flannel`.
If the download fails, create kube-flannel.yml manually with the following content:
```yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.4.1-flannel1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.25.4
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.25.4
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
```
Deploying flannel pulls two images. From inside China the pull sometimes fails, so the images can be fetched elsewhere and imported offline into this environment:
```shell
[root@k8s-master ~]# docker images
REPOSITORY                   TAG               IMAGE ID       CREATED        SIZE
flannel/flannel              v0.25.4           e6c43605b714   18 hours ago   81MB
flannel/flannel-cni-plugin   v1.4.1-flannel1   1e3c860c213d   7 weeks ago    10.3MB
```
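One way to do the offline import (a sketch; the tar file names are arbitrary): on a machine that can reach Docker Hub, save the images with `docker save`, copy the archives over, and load them on every node:

```shell
# On a machine with access to Docker Hub
docker pull flannel/flannel:v0.25.4
docker pull flannel/flannel-cni-plugin:v1.4.1-flannel1
docker save -o flannel.tar flannel/flannel:v0.25.4
docker save -o flannel-cni-plugin.tar flannel/flannel-cni-plugin:v1.4.1-flannel1

# On each cluster node, after copying the tar files over
docker load -i flannel.tar
docker load -i flannel-cni-plugin.tar
```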
4. Create a test application

```shell
# Create an nginx deployment and expose it outside the cluster
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

# Check the deployed application
kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-j9lnv   1/1     Running   0          30s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        34m
service/nginx        NodePort    10.102.197.201   <none>        80:30510/TCP   19s
```
Nginx can now be reached at any node's IP on port 30510. Note that the NodePort is assigned randomly from the 30000-32767 range, so check `kubectl get svc nginx` for the actual port in your cluster.
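The service can be verified from any machine that can reach the nodes, for example:

```shell
# Any node IP works, since a NodePort is opened on every node;
# this should print the nginx welcome page title
curl -s http://192.168.0.51:30510 | grep "<title>"
```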