Deploying Kubernetes with kubeadm


Set the hostname

hostnamectl set-hostname k8s-master
cat /etc/hostname
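
Each node in the cluster should have a unique hostname; on the workers, set one the same way (the name below is only an example):

hostnamectl set-hostname k8s-node1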

Install Docker

Docker comes in two editions: docker-ce (Community Edition) and docker-ee (Enterprise Edition). We use the Community Edition.

CentOS

curl -fsSL https://get.docker.com/ | sh -s docker --mirror Aliyun
systemctl enable docker
systemctl start docker

List the installable versions:

 yum list docker-ce.x86_64 --showduplicates | sort -r
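
To pin one of the listed versions instead of the latest (the version string below is only an example, use one from the list above):

yum install -y docker-ce-18.09.9-3.el7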

Ubuntu

apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-cache madison docker-ce
# install a specific version
#apt-get install docker-ce=17.12.0~ce-0~ubuntu
# install the latest version
apt-get install docker-ce
systemctl enable docker
systemctl start docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
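
After the restart, confirm Docker is actually using the systemd cgroup driver (kubeadm expects this to match the kubelet):

docker info | grep -i "cgroup driver"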

Install Kubernetes

Install the kubeadm packages

That is: kubeadm, kubelet, and kubectl.

Install them with yum; first configure the Aliyun yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install a specific version of the kube packages:

kube_version=1.14.3
yum install -y kubelet-${kube_version}-0 kubeadm-${kube_version}-0 kubectl-${kube_version}-0

Or install the latest version:

yum install -y kubelet kubeadm kubectl

Enable and start kubelet:

systemctl enable kubelet && systemctl start kubelet
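
A quick check of the installed versions (note that kubelet will keep restarting until kubeadm init supplies its configuration; that is expected at this stage):

kubeadm version -o short
kubectl version --client --short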

Disable swap, SELinux, and firewalld

Disable swap:

echo "vm.swappiness = 0">> /etc/sysctl.conf
sysctl -w vm.swappiness=0
swapoff -a
sysctl -p
echo "swapoff -a">>/etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
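
Verify swap is fully off; as a common alternative to the rc.local trick, you can also comment out the swap entry in /etc/fstab so it stays off across reboots (a sketch, a backup copy is written first):

free -h   # the Swap line should show 0
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab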

Disable SELinux:

setenforce 0
selinux_setline=`sed -n "/^SELINUX=/=" /etc/selinux/config`
sed -i "${selinux_setline} d" /etc/selinux/config
sed -i "${selinux_setline} iSELINUX=disabled" /etc/selinux/config
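
Verify the runtime state (it should now print Permissive, and Disabled after a reboot):

getenforce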

Disable firewalld:

systemctl stop firewalld
systemctl disable firewalld

Adjust the kernel network settings (for the bridge netfilter):

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf

sysctl -p

If the step above fails, load the kernel module first:

modprobe br_netfilter
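
To make sure the module is loaded again after a reboot, one common approach is a modules-load.d entry:

echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf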

Control-plane node: initialization config file

kubeadm-config.yml

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.3
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 172.19.20.71
  - 172.19.20.72
  - 172.19.20.73
  - 172.19.20.74
  - 172.19.20.75
  - node1
  - node2
  - node3
  - node4
  - node5
  - 127.0.0.1
networking:
  podSubnet: 10.244.0.0/16

apiServer.certSANs should list the IPs, domain names, and hostnames of every server that might act as a control-plane node. Servers not listed here can still join the cluster as worker nodes, but cannot become control-plane nodes.
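
Optionally, the control-plane images can be pre-pulled from the configured mirror before initializing:

kubeadm config images pull --config kubeadm-config.yml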

Run the initialization:

kubeadm init --config=kubeadm-config.yml

When initialization finishes, a message prompts you to run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If the preflight check reports [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1, run:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

If the preflight check reports [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2, the VM only has one CPU assigned; this kind of error can simply be ignored:

kubeadm init --config kubeadm-config.yml --ignore-preflight-errors=NumCPU

Install a network plugin

Options include flannel, calico, weave-net, and others; we use weave-net.

Install it with a single command:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
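
After a minute or two the weave-net pods should be Running and the master should report Ready:

kubectl get pods -n kube-system
kubectl get nodes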

Join nodes

After cluster initialization a kubeadm join ... command is printed. If you have lost it, run

kubeadm token create --print-join-command

to regenerate it, then run the printed command on each of the other nodes to join them to the cluster.
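
The printed command looks roughly like this (the token, hash, and API server address are placeholders; use the values from your own cluster):

kubeadm join 172.19.20.71:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>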

Then check the nodes:

kubectl get node

Kubernetes configuration

Command completion

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

On CentOS, if completion does not work, install bash-completion:

yum -y install bash-completion

Create a namespace

apiVersion: v1
kind: Namespace
metadata:
  name: test

Run kubectl apply -f namespace.yml to create a namespace named test.

Set a resource quota for the namespace

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 16Gi
    limits.cpu: "5"
    limits.memory: 20Gi
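
The quota has to be applied into a namespace; assuming the manifest above is saved as quota.yml (the file name is illustrative):

kubectl apply -f quota.yml --namespace=test
kubectl describe resourcequota mem-cpu-demo --namespace=test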

Allow pods to be scheduled on the master (not recommended):

kubectl taint nodes --all node-role.kubernetes.io/master-
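
To restore the default behaviour later, re-apply the taint (the node name is an example):

kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule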

Install the ingress controller

The three files mandatory.yml, ingress-nginx-service.yml, and default-backend.yml have the following contents:

mandatory.yml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              hostPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---

ingress-nginx-service.yml

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 32080  #http
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 32443  #https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

default-backend.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          image: k8s.gcr.io/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx

---



The image k8s.gcr.io/defaultbackend-amd64:1.5 cannot be pulled directly from inside mainland China; it needs a proxy or a mirror.
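
One possible workaround is to pull the image from a mirror registry on each node and retag it (the mirror path below is an assumption; substitute a mirror you can actually reach):

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend-amd64:1.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend-amd64:1.5 k8s.gcr.io/defaultbackend-amd64:1.5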

Expose ingress-nginx externally

Label the chosen node:

kubectl label nodes worker node-role.kubernetes.io/ingress="true"

Modify the ingress Deployment in mandatory.yml to add a nodeSelector, then apply the files (see the commands after the snippet):

...
spec:
  ...
  template:
    ...
    spec:
      nodeSelector:
        node-role.kubernetes.io/ingress: "true"
      ...
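
With the node labeled and the nodeSelector added, apply the three files and check that the controller pod lands on that node:

kubectl apply -f mandatory.yml -f ingress-nginx-service.yml -f default-backend.yml
kubectl -n ingress-nginx get pods -o wide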

Use shared storage (mainly NFS)

1. Create the NFS share and export it:

mkdir -p /public/nfs
chmod 777 /public/nfs
yum install -y nfs-utils
systemctl enable nfs
echo "/public/nfs *(async,rw,no_root_squash,no_subtree_check)" > /etc/exports
exportfs -r
systemctl start nfs
showmount -e localhost

If the output shows:

Export list for localhost:
/public/nfs *

then the export was created successfully.
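
Note that every Kubernetes node that will mount the share also needs the NFS client utilities installed:

yum install -y nfs-utils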

2. Create the NFS provisioner

Change into the k8s/nfs directory (any directory works; it is best to keep files for different purposes in separate directories):

mkdir -p ${k8s_root}/nfs-sc
cd ${k8s_root}/nfs-sc

Create the RBAC rules (nfs-rbac.yaml):

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Create the provisioner Deployment (nfs-client-provisioner.yml):

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s.io/nfs-provisioner
            - name: NFS_SERVER
              value: 192.168.31.100
            - name: NFS_PATH
              value: /public/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.31.100
            path: /public/nfs

Create the dynamic StorageClass (storageClass.yml):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: nfs-storage-class
provisioner: k8s.io/nfs-provisioner
parameters:
  archiveOnDelete: "true"

For a real deployment, just replace 192.168.31.100 in these files with your NFS server's IP and /public/nfs with the actual export path; no namespace needs to be specified. Then apply everything in the directory:

kubectl apply -f .
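
To verify that dynamic provisioning works, you can create a throwaway PVC (the claim name is illustrative) and check that it becomes Bound:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage-class
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-claim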

Title: Deploying Kubernetes with kubeadm
Author: fyzzz
URL: https://fyzzz.cn/articles/2020/07/28/1595939297544.html