Infrastructure
Documentation for system infrastructure setup and configuration
Create a highly available Kubernetes cluster
Infrastructure Documentation
This section provides detailed instructions for setting up and configuring the infrastructure components required for the system deployment.
Install Docker runtime
Set up the cgroup driver for Docker
```
sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
```
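To confirm the daemon restarted with the new settings, you can check the cgroup driver Docker reports (a quick sanity check, not part of the original steps):

```
# Should print "Cgroup Driver: systemd" after the restart
docker info | grep -i "cgroup driver"
```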
Turn off swap:

- `swapoff -a`: disables swap for the current session only
- Comment out the swap line in /etc/fstab: keeps swap disabled after every reboot (see the sketch below)
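A minimal sketch combining both steps (assuming the swap entries in /etc/fstab contain the word "swap"):

```
# Disable swap immediately
sudo swapoff -a
# Comment out swap entries so the change survives reboots
sudo sed -i '/\sswap\s/ s/^\([^#]\)/#\1/' /etc/fstab
```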
Install kubeadm, kubelet, kubectl
Set up HAProxy
```
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    option tcplog
    mode http
    #option httplog
    option dontlognull
    timeout connect 10s
    timeout client 30s
    timeout server 30s

# servers listed here are configured in the nginx controller externalIPs
listen lets-encrypt-http-resolver2
    bind *:80
    mode http
    maxconn 8
    stats uri /haproxy?stats
    balance roundrobin
    server v127 192.168.0.127:80 check fall 3 rise 2 check-send-proxy inter 10s send-proxy

# servers listed here are configured in the nginx controller externalIPs
listen k8s-nginx-ingress
    bind *:443
    mode tcp
    maxconn 10000
    timeout tunnel 600s
    balance roundrobin
    option tcp-check
    server v127 192.168.0.127:443 check fall 3 rise 2 check-send-proxy inter 10s send-proxy

# master node servers
listen k8s-api-server
    bind *:7443
    mode tcp
    timeout connect 5s
    timeout client 24h
    timeout server 24h
    server v117 192.168.0.117:6443 check fall 3 rise 2
    server v118 192.168.0.118:6443 check fall 3 rise 2
    server v119 192.168.0.119:6443 check fall 3 rise 2
```
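Before restarting HAProxy, the file can be validated in check mode (a small sketch, assuming the config lives at the default /etc/haproxy/haproxy.cfg):

```
# Parse the config without starting the proxy
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Apply it
sudo systemctl enable haproxy
sudo systemctl restart haproxy
```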
Set up Keepalived

- Check the network interface name: ens160, br0, or eth0
```
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.117  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe6a:941c  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:6a:94:1c  txqueuelen 1000  (Ethernet)
        RX packets 1157858  bytes 430863324 (430.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1036082  bytes 186847619 (186.8 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```

- Set up MASTER and BACKUP nodes, and check the priority values:
```
global_defs {
    enable_script_security
    script_user root root        # USER
    router_id lb01
}

vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    virtual_router_id 52
    advert_int 1
    priority 100                 # MASTER > BACKUP: 100 > 99 > 98
    state MASTER                 # MASTER or BACKUP
    interface ens160             # network interface
    unicast_src_ip 192.168.0.117 # the IP address of this machine
    unicast_peer {
        192.168.0.118            # IP address of peer
        192.168.0.119            # IP address of peer
    }
    virtual_ipaddress {
        192.168.0.200 dev ens160 # the virtual address
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_haproxy
    }
}
```
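Once the file is in place, enable and restart the service (assuming the config was written to /etc/keepalived/keepalived.conf, the default path):

```
sudo systemctl enable keepalived
sudo systemctl restart keepalived
sudo systemctl status keepalived
```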
Verify Keepalived
- Check that the virtual IP address 192.168.0.200 is present:
```
ip a s
```

```
ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:6a:94:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.117/24 brd 192.168.0.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet 192.168.0.200/32 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6a:941c/64 scope link
       valid_lft forever preferred_lft forever
```

- Stop HAProxy and check that the IP appears on another machine
- Check the Keepalived logs:
```
journalctl -u keepalived.service -f
```
- Re-check that the IP address 192.168.0.200 has moved to a BACKUP node.
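A minimal failover test, assuming 192.168.0.117 currently holds the virtual IP:

```
# On the MASTER: stop HAProxy so the chk_haproxy track script fails
sudo systemctl stop haproxy

# On a BACKUP node: the virtual IP should appear within a few seconds
ip a s ens160 | grep 192.168.0.200

# Restore the MASTER afterwards
sudo systemctl start haproxy
```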
Create a highly available cluster
- Initialize the first master:
#"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" # Add option --pod-network-cidr=10.244.0.0/16 for install Plannel CNI network # https://github.com/flannel-io/flannel/blob/master/Documentation/kubernetes.md sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "192.168.0.200:7443" --upload-certs - Master join:
```
kubeadm join 192.168.0.200:7443 --token ${token} \
    --discovery-token-ca-cert-hash ${token-ca-cert} \
    --control-plane --certificate-key ${certificate}
```

- Join workers:
```
kubeadm join 192.168.0.200:7443 --token ${token} \
    --discovery-token-ca-cert-hash ${token-ca-cert}
```

- Set up the local kubeconfig:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
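The ${token}, ${token-ca-cert}, and ${certificate} values are printed by kubeadm init; if they have expired, they can be regenerated on the first master (a sketch using standard kubeadm subcommands):

```
# Print a fresh worker join command (token + CA cert hash)
kubeadm token create --print-join-command

# Re-upload control-plane certificates and print the certificate key
# to use with --certificate-key when joining masters
sudo kubeadm init phase upload-certs --upload-certs
```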
Apply CNI Plugin
- Flannel (required for Rook Ceph):
```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

- Weave:
```
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
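Whichever CNI is chosen, its pods should reach Running and the nodes should turn Ready (a quick check):

```
kubectl get pods -A | grep -E 'flannel|weave'
kubectl get nodes
```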
Update the primary HAProxy (optional)

```
listen k8s-new-api-server
    bind *:8443
    mode tcp
    timeout connect 5s
    timeout client 24h
    timeout server 24h
    server p200 192.168.0.200:7443 check fall 3 rise 2 # NEW HAPROXY
```

Renew certificates when exposing the cluster publicly
```
# UPDATE THE CERTIFICATE ON EACH MASTER
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs apiserver --apiserver-advertise-address 192.168.0.200 --apiserver-cert-extra-sans 115.79.213.25
kubeadm certs renew admin.conf
```
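The new expiry dates can then be verified on each master (a quick check; available in the same kubeadm versions that provide `kubeadm certs renew`):

```
kubeadm certs check-expiration
```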
Install Rook Ceph

- Clone the source:
```
git clone --single-branch --branch v1.6.7 https://github.com/rook/rook.git
```

- Deploy the Rook operator:
```
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# verify the rook-ceph-operator is in the `Running` state before proceeding
kubectl -n rook-ceph get pod
```

- Create a Rook Ceph cluster:
```
kubectl create -f cluster.yaml
```
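Bringing the Ceph cluster up can take several minutes; progress can be watched until the mon/mgr/osd pods are Running (a sketch):

```
# Watch the pods come up
kubectl -n rook-ceph get pod -w

# Check the health reported in the CephCluster resource
kubectl -n rook-ceph get cephcluster
```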
Integrate GitLab with Kubernetes (optional)

Install Helm
```
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm -y
```
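A quick check that the client installed correctly:

```
helm version
```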
Install Cert-Manager

- Install Helm (see above)
- Create the namespace:
```
kubectl create namespace cert-manager
```

- Add the Jetstack Helm repository:
```
helm repo add jetstack https://charts.jetstack.io
helm repo update
```

- Install Cert-Manager:
```
helm -n cert-manager install cert-manager jetstack/cert-manager \
    --set ingressShim.defaultIssuerName=letsencrypt-prod \
    --set ingressShim.defaultIssuerKind=ClusterIssuer \
    --set installCRDs=true \
    --version v1.4.0
```

- Create a ClusterIssuer:
```
# cluster-issuer-staging.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: master@vdatlab.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx
```

```
kubectl apply -f cluster-issuer-staging.yaml
```

- Check the ClusterIssuer:
```
kubectl get clusterissuers.cert-manager.io | grep True
```
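If the issuer does not show READY=True, its status conditions give the reason (a quick check):

```
kubectl describe clusterissuer letsencrypt-staging
```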