
Kubernetes Cluster Installation and Deployment

运维开发网 (https://www.qedev.com) · 2021-03-02 09:25 · Source: 51CTO · Author: 小李子博客

Preface: this walkthrough was done on CentOS 7.4. The procedure has not been tested on other versions, so it may or may not work there; feel free to try it.

I. Kubernetes master node initialization

1. Disable the firewall

[root@master ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
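
If firewalld is still active on any of the machines, a minimal sketch to stop and disable it (standard systemctl commands):

systemctl stop firewalld
systemctl disable firewalld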

2. Cluster time synchronization (NTP)

NTP server configuration (/etc/ntp.conf)

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# Comment out the default pool servers:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 127.0.0.1          # pointing server at the loopback address is enough
fudge 127.0.0.1 stratum 8

NTP client configuration (/etc/ntp.conf)

#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
server 192.168.0.218 iburst
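
After editing the configuration, restart and enable ntpd on the server and on each client; a minimal sketch, assuming the ntp package is installed and the service is named ntpd (the CentOS 7 default):

systemctl restart ntpd
systemctl enable ntpd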

Verify time synchronization (right after ntpd starts, refid shows .INIT. and reach is 0; it takes a few polling intervals before the client actually syncs)

~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 192.168.0.218   .INIT.          16 u   23   64    0    0.000    0.000   0.000

3. Set up passwordless SSH login

ssh-keygen
ssh-copy-id root@master
ssh-copy-id root@node1
ssh-copy-id root@node2
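
A quick check that key-based login works; the hostnames assume the /etc/hosts entries defined in the next step:

ssh root@node1 hostname   # should print node1 without prompting for a password
ssh root@node2 hostname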

4. Set the hostnames and the hosts file

# Set the hostname on the master and on each node respectively
hostnamectl set-hostname master
exec bash

# Keep /etc/hosts identical on all hosts
vim /etc/hosts
192.168.0.3 master localhost
192.168.1.182 node1  localhost
192.168.1.218 node2  localhost
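
One way to push the same hosts file to the other machines (a sketch; relies on the passwordless SSH set up in step 3):

for h in node1 node2; do scp /etc/hosts root@$h:/etc/hosts; done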

5. Disable swap

# swapoff -a
# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  40G  0 disk 
├─vda1 253:1    0   4G  0 part 
└─vda2 253:2    0  36G  0 part /

cat /etc/fstab 
#
# /etc/fstab
# Created by anaconda on Wed May 29 10:22:23 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=cd57b66f-58d9-4a4c-8acd-f5b51fb0bfc7 /                       ext4    defaults        1 1
#UUID=c45fd23f-2d60-4474-9e8d-1e329573fb26 swap                    swap    defaults        0 0
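
swapoff -a only disables swap for the current boot; the swap entry in /etc/fstab has to stay commented out (as above) so swap does not come back after a reboot. A sed one-liner that comments out any uncommented swap line (a sketch; check the file afterwards):

sed -ri '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab
grep swap /etc/fstab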

6. Configure the Aliyun yum repositories and install kubelet, kubeadm, kubectl, and docker-ce

1) Configure the Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0    # switch SELinux to permissive mode for the current boot
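
setenforce 0 only lasts until the next reboot; to keep SELinux permissive permanently, /etc/selinux/config can be adjusted as well (a minimal sketch):

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config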

~]# yum list kubelet --showduplicates | sort -r | grep 1.15.10-0
    kubelet.x86_64                       1.15.10-0                       kubernetes 
    kubelet.x86_64                       1.15.10-0                       @kubernetes
~]# yum install -y  kubeadm-1.15.10-0  kubelet-1.15.10-0 
~]# systemctl enable kubelet && systemctl start kubelet

2) Install docker-ce
# Step 1: install the required system utilities
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: refresh the cache and install docker-ce
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 4: start the Docker service
sudo systemctl start docker && sudo systemctl status docker
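
The master runs the control-plane components as containers, so Docker should come back automatically after a reboot; enabling it at boot is a small but worthwhile extra step:

sudo systemctl enable docker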

7. Export the default kubeadm init configuration to a YAML file, then change the image repository and set the pod network CIDR

# kubeadm  config print init-defaults >kubeadm-config.yml

The edited configuration is as follows:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
#localAPIEndpoint:                          # commented out
#  advertiseAddress: 1.2.3.4                # commented out
#  bindPort: 6443                           # commented out
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master.novalocal
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: mirrorgooglecontainers   # replace the default image repository (k8s.gcr.io)
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16                # add the pod CIDR, otherwise installing the flannel add-on later will fail
scheduler: {}
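
Optionally, the control-plane images can be pre-pulled through the modified repository before running init, which makes the init step itself much faster; kubeadm can read the same config file:

kubeadm config images pull --config kubeadm-config.yml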

8. Initialize the master node, making sure to pass the configuration file exported above

[root@master ~]# kubeadm init --config kubeadm-config.yml
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.97]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node01 localhost] and IPs [192.168.0.97 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node01 localhost] and IPs [192.168.0.97 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.501771 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

"#下面是node节点加入是所需要执行的命令,记住一定要保存好"
kubeadm join 192.168.0.97:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3c031e0510e86df66b34f7459e4319db9652aaf6d3b47823b501f5ef5af1a99b 
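
The bootstrap token above is only valid for 24 hours (ttl: 24h0m0s in the config). If a node needs to join later, a fresh join command can be generated on the master:

kubeadm token create --print-join-command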

9. Set up the kubeconfig needed to access the API server

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
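
kubectl should work from this point on; note that the master will report NotReady until the network add-on from the next step is installed:

kubectl get nodes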

10. Install the flannel network add-on

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml

Output:
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Or install canal instead:
]# kubectl apply -f https://docs.projectcalico.org/v3.manifests/canal.yaml
configmap/canal-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
daemonset.apps/canal created
serviceaccount/canal created

11. Check the cluster status

[root@master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-node01   Ready    master   10m   v1.15.0
[root@master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-57c7898648-5gtqz             1/1     Running   0          10m
coredns-57c7898648-qthrx             1/1     Running   0          10m
etcd-k8s-node01                      1/1     Running   0          10m
kube-apiserver-k8s-node01            1/1     Running   0          9m55s
kube-controller-manager-k8s-node01   1/1     Running   0          10m
kube-flannel-ds-amd64-pq6fv          1/1     Running   0          3m54s
kube-proxy-lqlkv                     1/1     Running   0          10m
kube-scheduler-k8s-node01            1/1     Running   0          9m59s

12. Fixing the WARNING shown during kubeadm init

1) [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

2) Create or edit /etc/docker/daemon.json and add the following:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

3) Restart Docker:

systemctl restart docker
systemctl status docker
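
To confirm the driver actually switched, check the cgroup driver reported by Docker; it should now read systemd instead of cgroupfs:

docker info | grep -i 'cgroup driver'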

4) Reset the cluster
kubeadm reset

5) Re-initialize the cluster
kubeadm init --config kubeadm-config.yml

"没有出现刚才的warnig表示已经解决"

II. Joining worker nodes to the cluster

1. The yum repository configuration is the same as on the master; the packages to install are docker, kubeadm, and kubelet

2. Disable the firewall, SELinux, and swap

1. Configure the yum repositories

[root@master ~]# ls /etc/yum.repos.d/
CentOS7-Base.repo  docker-ce.repo  kubeadm-config.yml  kubernetes.repo
[root@master ~]# scp /etc/yum.repos.d/* k8s-node01:/etc/yum.repos.d/
# on node01: refresh the yum cache
[root@node01 ~]# yum makecache

2. Install the required packages

#1) Install the Kubernetes components
 yum install kubeadm kubelet
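 # Note (assumption): an unpinned install takes whatever is newest in the repo and may not
 # match the master; to match the master exactly, pin the same versions instead:
 # yum install -y kubeadm-1.15.10-0 kubelet-1.15.10-0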
#2) Install docker
 yum list docker-ce --showduplicates | sort -r   # list available docker-ce versions
 "回显信息如下"
 * updates: mirror.centos.org
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror, langpacks
Installed Packages
 * extras: mirror.centos.org
docker-ce.x86_64            3:18.09.8-3.el7                    docker-ce-stable 
docker-ce.x86_64            3:18.09.8-3.el7                    @docker-ce-stable
docker-ce.x86_64            3:18.09.7-3.el7                    docker-ce-stable 
docker-ce.x86_64            3:18.09.6-3.el7                    docker-ce-stable 
......
#3) Install docker-ce
    yum install docker-ce

3. Start the services

#Start docker and enable it at boot
    systemctl start docker
    systemctl enable docker   # skipping this produces a warning when joining the cluster, but the join still works
#Only enable kubelet at boot; do not start the service yet!
    systemctl enable kubelet  # skipping this produces a warning when joining the cluster, but the join still works
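
If the systemd cgroup-driver fix from step 12 of the master section was applied there, applying the same /etc/docker/daemon.json change on each worker node keeps the Docker configuration consistent across the cluster (a sketch):

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker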

4. Disable swap

# swapoff -a
# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  40G  0 disk 
├─vda1 253:1    0   4G  0 part 
└─vda2 253:2    0  36G  0 part /

cat /etc/fstab 
#
# /etc/fstab
# Created by anaconda on Wed May 29 10:22:23 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=cd57b66f-58d9-4a4c-8acd-f5b51fb0bfc7 /                       ext4    defaults        1 1
#UUID=c45fd23f-2d60-4474-9e8d-1e329573fb26 swap                    swap    defaults        0 0

5. Join the cluster

kubeadm join 192.168.0.97:6443 --token abcdef.0123456789abcdef   --discovery-token-ca-cert-hash sha256:3c031e0510e86df66b34f7459e4319db9652aaf6d3b47823b501f5ef5af1a99b
"回显信息如下即表示加入成功,只需要等待镜像下载完,而后自动启动容器即可"
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
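
If a freshly joined node stays NotReady for a long time, it is usually still pulling images; the kubelet log and the local image list on the node are the first places to look (a troubleshooting sketch):

journalctl -u kubelet -f     # follow the kubelet log on the node
docker images                # check whether the kube-proxy / flannel images have arrived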

6. Finally, check the cluster from the master

[root@master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   8m36s   v1.15.0
k8s-node02   Ready    <none>   106s    v1.15.1
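
As an optional final smoke test, a throwaway Deployment confirms that pods can be scheduled onto the worker nodes and reach Running state; the image and deployment names here are just examples:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide          # the pod should land on one of the worker nodes
kubectl delete deployment nginx   # clean up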
