
k8s service ingress association ([Kubernetes] k8s cluster setup, complete walkthrough)


Table of Contents

I. Preparation

II. Configuration

1. Change the hostnames

2. Edit the hosts file

3. Disable the firewall and SELinux

4. Disable swap

5. Adjust kernel network parameters

6. Load kernel modules

7. Passwordless SSH login

8. Install Kubernetes and Docker

9. Check the image versions required by the cluster

10. Initialize the master node

11. Configure the nodes

12. Pull and configure an Nginx image

I. Preparation

The environment is based on Red Hat 8.5.

1. Prepare three virtual machines with the following IP addresses:

master:192.168.10.129

node1:192.168.10.134

node2:192.168.10.136

You can also set up one machine, clone it into the other two, and then change the hostnames.

II. Configuration

1. Change the hostnames

# On the master VM
[root@mgr1 ~]# hostnamectl set-hostname k8s-master
# On the node1 VM
[root@mgr2 ~]# hostnamectl set-hostname k8s-node1
# On the node2 VM
[root@mgr3 ~]# hostnamectl set-hostname k8s-node2

2. Edit the hosts file

[root@k8s-master ~]# vim /etc/hosts
192.168.10.129 k8s-master
192.168.10.134 k8s-node1
192.168.10.136 k8s-node2
# Add the same entries on node1 and node2
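As a quick optional check (not part of the original steps), you can verify that the names in /etc/hosts resolve from every machine:

# Each host should answer to its name from all three machines
ping -c 1 k8s-master
ping -c 1 k8s-node1
ping -c 1 k8s-node2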

3. Disable the firewall and SELinux

# Do this on all three nodes
[root@k8s-master ~]# setenforce 0
[root@k8s-node1 ~]# systemctl stop firewalld.service
[root@k8s-node1 ~]# systemctl disable firewalld.service
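Note that setenforce 0 only puts SELinux into permissive mode until the next reboot. A common way to make the change survive reboots (an optional addition to the original steps) is to edit /etc/selinux/config, for example:

# Keep SELinux permissive across reboots (run on all three nodes)
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
getenforce    # should now report Permissive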

4. Disable swap

# Do this on all three nodes: comment out the swap line
[root@k8s-master ~]# vim /etc/fstab
#/dev/mapper/rhel-swap none swap defaults 0 0
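Commenting out the fstab entry only stops swap from being mounted at the next boot. To turn it off immediately without rebooting, you can also run the following (optional, not part of the original steps):

# Disable swap for the running system and verify
swapoff -a
free -h    # the Swap line should show 0B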

5. Adjust kernel network parameters

# Do this on all three nodes
[root@k8s-master ~]# vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@k8s-master ~]# sysctl --system    # loads the drop-in files under /etc/sysctl.d/ (plain "sysctl -p" only reads /etc/sysctl.conf)

6. Load kernel modules

# Do this on all three nodes
[root@k8s-master ~]# modprobe br_netfilter    # load the kernel module
[root@k8s-master ~]# lsmod | grep br_netfilter
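modprobe only loads br_netfilter for the current boot, and the net.bridge.* keys from step 5 only exist once the module is loaded. One common way to make both persistent (an optional addition, not in the original steps) is a modules-load.d drop-in:

# Load br_netfilter automatically at every boot (all three nodes)
echo br_netfilter > /etc/modules-load.d/k8s.conf
# Re-apply the sysctl settings now that the module is loaded
sysctl --system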

7. Passwordless SSH login

# Do this on all three nodes
[root@k8s-master ~]# ssh-keygen
[root@k8s-master ~]# ssh-copy-id root@192.168.10.129
[root@k8s-master ~]# ssh-copy-id root@192.168.10.134
[root@k8s-master ~]# ssh-copy-id root@192.168.10.136

8. Install Kubernetes and Docker

# Do this on all three nodes
# Configure the yum repositories
[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# cat k8s.repo
[k8s]
name=k8s
baseurl=http://mirrors.ustc.edu.cn/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
[root@k8s-master yum.repos.d]# mount /dev/sr0 /mnt/
[root@master yum.repos.d]# wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Install the dependencies, the pinned 1.21.3 Kubernetes packages, and Docker
dnf install -y yum-utils device-mapper-persistent-data lvm2
dnf remove -y podman
dnf install -y iproute-tc kubelet-1.21.3 kubeadm-1.21.3 kubectl-1.21.3 docker-ce
# Enable the services
systemctl enable kubelet
systemctl enable --now docker
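Before moving on, you can confirm that the pinned versions landed on every node; this is an optional check, not part of the original write-up:

# All three nodes should report 1.21.3 for the Kubernetes tools
kubeadm version -o short
kubelet --version
kubectl version --client
docker --version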

Modify the Docker configuration file

[root@k8s-master ~]# systemctl start docker    # start Docker
[root@k8s-master ~]# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://8zs3633v.mirror.aliyuncs.com"]
}
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# systemctl enable docker.service
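Once Docker is back up, it is worth confirming that it picked up the systemd cgroup driver requested in daemon.json, since kubeadm expects it; this quick check is an addition, not part of the original text:

# Should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"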

9. Check the image versions required by the cluster

Note: do this on all three nodes.

[root@k8s-master yum.repos.d]# kubeadm config images list

Then pull these images by running the following commands:

docker pull kittod/kube-apiserver:v1.21.3
docker pull kittod/kube-controller-manager:v1.21.3
docker pull kittod/kube-scheduler:v1.21.3
docker pull kittod/kube-proxy:v1.21.3
docker pull kittod/pause:3.4.1
docker pull kittod/etcd:3.4.13-0
docker pull kittod/coredns:v1.8.0
docker pull kittod/flannel:v0.14.0

After the pulls finish, retag the images under the names kubeadm expects (otherwise the images will not be found later) by running the following:

docker tag kittod/kube-apiserver:v1.21.3 k8s.gcr.io/kube-apiserver:v1.21.3
docker tag kittod/kube-controller-manager:v1.21.3 k8s.gcr.io/kube-controller-manager:v1.21.3
docker tag kittod/kube-scheduler:v1.21.3 k8s.gcr.io/kube-scheduler:v1.21.3
docker tag kittod/kube-proxy:v1.21.3 k8s.gcr.io/kube-proxy:v1.21.3
docker tag kittod/pause:3.4.1 k8s.gcr.io/pause:3.4.1
docker tag kittod/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag kittod/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker tag kittod/flannel:v0.14.0 quay.io/coreos/flannel:v0.14.0
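Pulling and retagging each image by hand is repetitive; the loop below is only a sketch that reproduces the same pulls and tags in one pass, using the same kittod mirror images and target names as above:

# Pull each mirror image and retag it under the name kubeadm/flannel expects
for pair in \
  "kube-apiserver:v1.21.3 k8s.gcr.io/kube-apiserver:v1.21.3" \
  "kube-controller-manager:v1.21.3 k8s.gcr.io/kube-controller-manager:v1.21.3" \
  "kube-scheduler:v1.21.3 k8s.gcr.io/kube-scheduler:v1.21.3" \
  "kube-proxy:v1.21.3 k8s.gcr.io/kube-proxy:v1.21.3" \
  "pause:3.4.1 k8s.gcr.io/pause:3.4.1" \
  "etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0" \
  "coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0" \
  "flannel:v0.14.0 quay.io/coreos/flannel:v0.14.0"
do
  src="kittod/${pair%% *}"   # source image on the kittod mirror
  dst="${pair#* }"           # target name expected by kubeadm / flannel
  docker pull "$src" && docker tag "$src" "$dst"
done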

10. Initialize the master node

kubeadm init \
  --kubernetes-version=v1.21.3 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --apiserver-advertise-address=192.168.10.129

If you see this error:

    [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

add the flag --ignore-preflight-errors=all to the kubeadm init command (or give the VM at least two CPUs).

If it still fails, reset the partial installation and clean up:

     systemctl stop kubelet

     rm -rf /etc/kubernetes/*

     systemctl stop docker 

        如果停止失败

            reboot

        docker container prune

        docer ps -a

        如果没有容器                 ,就说明删干净了

     rm -rf /var/lib/kubelet/

     rm -rf /var/lib/etcd

After a successful initialization you will see output like the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.129:6443 --token xwvae1.05gyglinbz62ui3i \

    --discovery-token-ca-cert-hash sha256:f2701ff3276b5c260900314f3871ba5593107809b62d741c05f452caad62ffa8

Follow the prompts and run:

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf

11. Configure the nodes

# Run on both node machines
[root@k8s-node1 ~]# kubeadm join 192.168.10.129:6443 --token xwvae1.05gyglinbz62ui3i --discovery-token-ca-cert-hash sha256:f2701ff3276b5c260900314f3871ba5593107809b62d741c05f452caad62ffa8
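The token in the join command comes from kubeadm init. If you have lost the command or the token has expired (tokens are valid for 24 hours by default), you can print a fresh one on the master; this tip is an addition to the original text:

# Run on the master: prints a ready-to-paste kubeadm join command with a new token
kubeadm token create --print-join-command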

If the join fails:

    1. kubeadm reset -f

    2.

    rm -rf /etc/kubernetes/kubelet.conf

    rm -rf /etc/kubernetes/pki/ca.crt

    systemctl restart kubelet

1) Check the node status on the master

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   3m48s   v1.21.3
k8s-node1    NotReady   <none>                 3m9s    v1.21.3
k8s-node2    NotReady   <none>                 3m6s    v1.21.3

At this point all of the nodes are NotReady.

2) Check the pod status in the cluster

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-46dzt             0/1     Pending   0          4m54s
coredns-558bd4d5db-vpqgl             0/1     Pending   0          4m54s
etcd-k8s-master                      1/1     Running   0          5m7s
kube-apiserver-k8s-master            1/1     Running   0          5m7s
kube-controller-manager-k8s-master   1/1     Running   0          5m7s
kube-proxy-bjxgt                     1/1     Running   0          4m32s
kube-proxy-bmjnz                     1/1     Running   0          4m54s
kube-proxy-z6jzl                     1/1     Running   0          4m29s
kube-scheduler-k8s-master            1/1     Running   0          5m7s

3) Check the kubelet logs on the node

[root@k8s-master ~]# journalctl -f -u kubelet
-- Logs begin at Fri 2022-09-30 16:34:04 CST. --
Sep 30 17:57:25 k8s-master kubelet[23732]: E0930 17:57:25.097653 23732 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Sep 30 17:57:28 k8s-master kubelet[23732]: I0930 17:57:28.035887 23732 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Sep 30 17:57:30 k8s-master kubelet[23732]: E0930 17:57:30.104181 23732 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

The following error appears:

        runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Fix:

The logs say the network is not ready because no CNI network plugin is installed, so install one:

# Run on the master
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
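Before rechecking the nodes, you can watch the flannel and CoreDNS pods come up (an optional check, not in the original text); depending on the manifest version, the flannel DaemonSet pods may live in the kube-system or kube-flannel namespace:

# Wait until the kube-flannel-ds pods are Running and CoreDNS leaves Pending
kubectl get pods --all-namespaces -o wide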

Check again:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   45h   v1.21.3
k8s-node1    Ready    <none>                 45h   v1.21.3
k8s-node2    Ready    <none>                 45h   v1.21.3

Both the master and the nodes are now Ready.

12. Pull and configure an Nginx image

Note: do this only on the master.

docker pull nginx
# Retag the image
docker tag nginx:latest kittod/nginx:1.21.5

Create a deployment:

kubectl create deployment nginx --image=kittod/nginx:1.21.5

Expose the port:

kubectl expose deployment nginx --port=80 --type=NodePort
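kubectl expose creates a Service object behind the scenes. For reference, a roughly equivalent declarative manifest could be applied instead; this is only a sketch, assuming the default app=nginx label that kubectl create deployment adds:

# Alternative to "kubectl expose": apply an equivalent NodePort Service declaratively
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx          # label set by "kubectl create deployment nginx"
  ports:
  - port: 80            # Service port inside the cluster
    targetPort: 80      # container port on the nginx pod
EOF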

Check the pods and services:

[root@k8s-master ~]# kubectl get pods,service
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-8675954f95-cvkvz   1/1     Running   0          2m20s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        46h
service/nginx        NodePort    10.98.220.104   <none>        80:30288/TCP   2m10s

Check the randomly assigned NodePort:

[root@k8s-master ~]# netstat -lntup | grep kube-proxy
tcp        0      0 127.0.0.1:10249   0.0.0.0:*   LISTEN   3834/kube-proxy
tcp        0      0 0.0.0.0:30288     0.0.0.0:*   LISTEN   3834/kube-proxy
tcp6       0      0 :::10256          :::*        LISTEN   3834/kube-proxy

Test the Nginx service
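A simple test is to request the NodePort on any node's IP; the port 30288 below is the one assigned in the service output above and will differ in your own cluster:

# The default Nginx welcome page should come back from every node's IP
curl http://192.168.10.129:30288
curl http://192.168.10.134:30288
curl http://192.168.10.136:30288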

Done.

