Commit 50cd51d

Committed Dec 21, 2020
Added part 6 - prepare the worker nodes

1 file changed: +273 −0
‎06-Worker-Nodes.md
# Bootstrapping the Kubernetes Worker Nodes

All of the commands below need to be run on each of the worker nodes (`p1` and `p2` in this case).

## Prerequisites

Install `socat`, `conntrack` and `ipset`:
```shell
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
```
Enable `systemd-resolved` for DNS:

```shell
sudo systemctl enable systemd-resolved.service
sudo systemctl start systemd-resolved.service
```
Disable Swap

As per [this guide](https://www.paulcourt.co.uk/article/pi-swap), there are a few commands to run in order to permanently disable swap.
```shell
sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove
sudo rm -f /etc/init.d/dphys-swapfile

sudo service dphys-swapfile stop
sudo systemctl disable dphys-swapfile.service
```
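Once these commands have run, you can confirm that swap is fully off. This check is standard Linux and not part of the original guide: with no swap active, `/proc/swaps` contains only its header line.

```shell
# With swap disabled, /proc/swaps shows only the header line
# (Filename / Type / Size / Used / Priority) and no device entries.
cat /proc/swaps
```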
Enable the `br_netfilter` kernel module; it is used by `kube-proxy`.

```shell
echo br_netfilter | sudo tee -a /etc/modules
```

After these steps, `reboot` the node.
## Install the worker binaries

```shell
sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes

sudo mv bin/containerd* bin/ctr /bin/
sudo mv bin/crictl bin/kube* bin/runc bin/recvtty /usr/local/bin/
sudo mv bin/* /opt/cni/bin/
```
58+
59+
## Configure CNI Networking
60+
61+
Pod CIDR is `10.200.${i}.0/24` where `${i}` is the sequence of the worker node. In this case, for the Pi 1 is `10.200.0.0/24` and for the Pi 2 is `10.200.1.0/24`. Choose accordingly one of the following for the appropriated worker node.
62+
63+
```shell
64+
POD_CIDR=10.200.0.0/24
65+
POD_CIDR=10.200.1.0/24
66+
```
67+
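Since the worker hostnames follow the `p${n}` pattern, the per-node CIDR can also be derived instead of pasted by hand. This helper is my own sketch, not part of the original guide, and assumes the `p1`/`p2` naming:

```shell
# Derive the node index from the hostname (p1 -> 0, p2 -> 1)
# and build the matching Pod CIDR from it.
i=$(( ${HOSTNAME#p} - 1 ))
POD_CIDR="10.200.${i}.0/24"
echo "${POD_CIDR}"
```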
```shell
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
            [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
```
```shell
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "name": "lo",
    "type": "loopback"
}
EOF
```
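Because the first heredoc substitutes `${POD_CIDR}` into JSON, a stray quote or an empty variable silently produces a broken config. A quick sanity check, a sketch of mine that assumes `python3` is available on the node:

```shell
# json.tool exits non-zero on malformed JSON, so this catches
# substitution mistakes in the generated CNI configs.
for conf in /etc/cni/net.d/*.conf; do
  python3 -m json.tool < "$conf" > /dev/null && echo "ok: $conf"
done
```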
## Configure containerd

Pay particular attention to `titilambert/armv6-pause:latest`: the original `k8s.gcr.io/pause` image doesn’t work on ARMv6, so I used an already-published alternative that worked for me. You could also build the image yourself from source once the `pause` binary is compiled, but I didn’t go that far.

```shell
sudo mkdir -p /etc/containerd/
```
```shell
cat <<EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri]
    sandbox_image = "docker.io/titilambert/armv6-pause:latest"
    [plugins.cri.containerd]
      snapshotter = "overlayfs"
      [plugins.cri.containerd.default_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = "/usr/local/bin/runc"
        runtime_root = ""
EOF
```
```shell
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF
```
This is mostly for troubleshooting, but it's handy to have `crictl` configured up front in case it's needed.

```shell
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 3
debug: true
pull-image-on-create: false
EOF
```
## Configure the Kubelet

```shell
sudo mv certs/${HOSTNAME}-key.pem certs/${HOSTNAME}.pem /var/lib/kubelet/
sudo mv config/${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv certs/ca.pem /var/lib/kubernetes/
```
```shell
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```
```shell
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
## Configure the Kubernetes Proxy

```shell
sudo mv config/kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
```shell
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
```
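Note that `clusterCIDR` is the /16 that contains both per-node `POD_CIDR` /24 ranges (`10.200.0.0/24` and `10.200.1.0/24`). A tiny illustration, my own sketch rather than part of the guide, of why any pod IP matches it:

```shell
# A pod IP belongs to 10.200.0.0/16 exactly when its first two
# octets are 10.200, regardless of which node's /24 it came from.
in_cluster_cidr() {
  case "$1" in
    10.200.*.*) echo "$1: inside 10.200.0.0/16" ;;
    *)          echo "$1: outside 10.200.0.0/16" ;;
  esac
}
in_cluster_cidr 10.200.0.5   # from p1's range
in_cluster_cidr 10.200.1.7   # from p2's range
```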
```shell
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml \\
  --masquerade-all
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
## Enable and start services

```shell
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
```
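If any service fails to start, the first thing worth checking is that every binary is where its unit file expects it. The paths come from the install step earlier; this check itself is my addition, not part of the original guide:

```shell
# The units reference these absolute paths, so each one must exist
# and be executable before systemd can start the services.
for bin in /bin/containerd /usr/local/bin/kubelet /usr/local/bin/kube-proxy; do
  if [ -x "$bin" ]; then echo "ok: $bin"; else echo "missing: $bin"; fi
done
```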
## Test worker nodes

On the worker nodes, check that `containerd` is up and running using the `crictl` tool:

```shell
sudo crictl info
```
On the master node, from the `$HOME` directory, run:

```shell
$ kubectl get nodes --kubeconfig config/admin.kubeconfig
NAME   STATUS   ROLES    AGE   VERSION
p1     Ready    <none>   34h   v1.18.13-rc.0.15+6d211539692cee-dirty
p2     Ready    <none>   12m   v1.18.13-rc.0.15+6d211539692cee-dirty
```

The trailing `-dirty` is there because I didn’t commit my changes to the kubelet source from the earlier build step, so the build script picked them up and updated the version identifier accordingly.
