#### Getting started: installing Kubernetes 1.13.2

- Three machines:
    - master-1: `192.168.0.127`
    - node-1: `192.168.0.128`
    - node-2: `192.168.0.129`
- Latest releases: <https://github.com/kubernetes/kubernetes/releases>
- Official changelog for 1.13: <https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md>
- Install Docker 18.06 on all nodes and configure the Aliyun mirror
    - Reference script: [click me o(∩_∩)o](https://github.com/judasn/Linux-Tutorial/blob/master/favorite-file/shell/install_docker_k8s_disable_firewalld_centos7-aliyun.sh)
    - Key command, listing the Docker versions available to install: `yum list docker-ce --showduplicates`
- On all nodes, configure the Kubernetes yum repo and install kubeadm, kubelet, and kubectl, all from the Aliyun mirror
- While initializing the cluster, kubeadm downloads many images, and by default it pulls them from Google's registry. Kubernetes 1.13 added the `--image-repository` option, which is a lifesaver: it lets you pull from a reachable mirror instead.
- Detailed steps:

```
# Sync host clocks
systemctl start chronyd.service
systemctl enable chronyd.service

# Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl disable iptables.service

# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Turn off swap
swapoff -a && sysctl -w vm.swappiness=0

# Set the hostname (run the matching line on each machine)
hostnamectl --static set-hostname k8s-master-1
hostnamectl --static set-hostname k8s-node-1
hostnamectl --static set-hostname k8s-node-2

# Add the host entries
vim /etc/hosts
192.168.0.127 k8s-master-1
192.168.0.128 k8s-node-1
192.168.0.129 k8s-node-2

# Passwordless SSH from the master
# Generate a key pair
ssh-keygen -t rsa

# Append the public key to authorized_keys
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

# Test:
ssh localhost

# Copy the public key to the other machines
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@k8s-node-1   # enter the k8s-node-1 password when prompted
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@k8s-node-2   # enter the k8s-node-2 password when prompted

# Test from k8s-master-1
ssh k8s-master-1
ssh k8s-node-1
ssh k8s-node-2

# Configure the Kubernetes yum repo (Aliyun mirror)
vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# Copy the repo file to the node machines
scp -r /etc/yum.repos.d/kubernetes.repo root@k8s-node-1:/etc/yum.repos.d/
scp -r /etc/yum.repos.d/kubernetes.repo root@k8s-node-2:/etc/yum.repos.d/

# On all machines
yum install -y kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2 --disableexcludes=kubernetes

# Make the kubelet cgroup driver match Docker's
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Append as the last line: Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

systemctl enable kubelet && systemctl start kubelet

kubeadm version
kubectl version

# Let bridged traffic pass through iptables
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

sysctl --system
```
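The interactive `vim /etc/hosts` step above can also be done non-interactively with a heredoc. A minimal sketch, writing to a demo file (`hosts.demo` is a stand-in for the real `/etc/hosts`):

```shell
# Append the cluster host entries without opening an editor.
# NOTE: hosts_file points at a demo file here; on a real machine use /etc/hosts
# (and do NOT delete it first -- the rm is only to keep this demo repeatable).
hosts_file="hosts.demo"
rm -f "$hosts_file"
cat >> "$hosts_file" <<'EOF'
192.168.0.127 k8s-master-1
192.168.0.128 k8s-node-1
192.168.0.129 k8s-node-2
EOF
grep -c 'k8s-' "$hosts_file"   # → 3
```

The same pattern works for `/etc/yum.repos.d/kubernetes.repo` and `/etc/sysctl.d/k8s.conf`, which makes the whole preparation scriptable instead of edit-by-hand.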
941+
- Initialize the master node:

```
# Recommended:
kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr 10.244.0.0/16 \
--kubernetes-version 1.13.2 \
--service-cidr 10.96.0.0/12 \
--apiserver-advertise-address=0.0.0.0 \
--ignore-preflight-errors=Swap

# 10.244.0.0/16 is the pod CIDR that the flannel add-on expects; the right value depends on which network add-on you plan to install

# The terminal prints the key output:
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.0.127:6443 --token 53mly1.yf9llsghle20p2uq --discovery-token-ca-cert-hash sha256:a9f26eef42c30d9f4b20c52058a2eaa696edc3f63ba20be477fe1494ec0146f7

# Alternatively, for the other popular network add-on, calico:
kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=192.168.0.0/16 --kubernetes-version v1.13.2

# Set up kubectl access
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=/etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.zshrc
source ~/.zshrc

# Look up our token
kubeadm token list

kubectl cluster-info
```
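If the `--discovery-token-ca-cert-hash` value from the init output gets lost, it can be recomputed: it is the SHA-256 digest of the cluster CA certificate's DER-encoded public key. A minimal sketch of the pipeline, run here against a throwaway self-signed certificate (on a real master you would read `/etc/kubernetes/pki/ca.crt` instead):

```shell
# Generate a throwaway CA cert just to demonstrate the hashing pipeline.
# On a real master, replace demo-ca.crt with /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo-ca.key -out demo-ca.crt -subj "/CN=demo-ca" 2>/dev/null

# SHA-256 of the DER-encoded public key, in the form kubeadm expects
hash=$(openssl x509 -pubkey -noout -in demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:${hash}"
```

On a live master you can also simply run `kubeadm token create --print-join-command` to get a fresh, complete join command when the original token has expired.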
998+
- Join from the node machines:

```
kubeadm join 192.168.0.127:6443 --token 53mly1.yf9llsghle20p2uq --discovery-token-ca-cert-hash sha256:a9f26eef42c30d9f4b20c52058a2eaa696edc3f63ba20be477fe1494ec0146f7

# On the master node: kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
# If every component is Healthy the control plane is fine; otherwise investigate.
# If necessary, run `kubeadm reset` and initialize the cluster again

# Install Flannel on the master
cd /opt && wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f /opt/kube-flannel.yml

# Verify:
kubectl get pods --all-namespaces
kubectl get nodes
# If a node is still NotReady, check the error messages:
kubectl describe pod kube-scheduler-k8s-master-1 -n kube-system
kubectl logs kube-scheduler-k8s-master-1 -n kube-system
tail -f /var/log/messages
```
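The `kubectl get cs` check above is easy to script by scanning the STATUS column. A minimal sketch, where the sample output from this section stands in for a live cluster:

```shell
# Count components whose STATUS column is not "Healthy".
# cs_output stands in for: kubectl get cs
cs_output='NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}'

# Drop the header row, keep rows where column 2 differs from "Healthy"
unhealthy=$(echo "$cs_output" | tail -n +2 | awk '$2 != "Healthy"' | wc -l)
if [ "$unhealthy" -eq 0 ]; then
  echo "all components healthy"
else
  echo "$unhealthy component(s) need attention; consider kubeadm reset"
fi
```

On a real master, replace the literal `cs_output` with `cs_output=$(kubectl get cs)` and the same check works unchanged.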
#### Key concepts

- The master node is responsible for cluster scheduling and cluster management
    - Common components: <https://kubernetes.io/docs/concepts/overview/components/>
        - kube-apiserver: the API service
        - kube-scheduler: scheduling
        - kube-controller-manager: container orchestration
        - etcd: stores the state of the entire cluster
        - kube-proxy: provides in-cluster service discovery and load balancing for Services
        - kube-dns: provides DNS for the entire cluster
- The node machines are responsible for everything container-related

- `Pods`
