Commit e78e87d

Merge pull request cncamp#16 from SignorMercurio/master
Docs improvement since module7
2 parents: 4afe0ba + 32cede2

File tree: 37 files changed, +720 −417 lines

module10/argocd/readme.MD

Lines changed: 33 additions & 22 deletions
### Install

```sh
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
### Access argocd

### Update argocd-server service to NodePort
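The doc does not spell out the command; one way to do it (a sketch, assuming `kubectl` is pointed at the cluster where argocd is installed) is a service-type patch:

```sh
# Change the argocd-server Service from ClusterIP to NodePort,
# then list it to see which node port was allocated.
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
kubectl get svc argocd-server -n argocd
```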
#### Access argocd console

- user: `admin`
- password:

```sh
k get secret -n argocd argocd-initial-admin-secret -oyaml
```
### Manage repositories -> connect repo using https

https://github.com/cncamp/test.git
### Create application

- sync policy: `manual`
- path: `.`
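The same application can also be created from the `argocd` CLI instead of the console (a sketch; the app name `httpserver` and the destination namespace are assumptions, and the CLI must already be logged in to the argocd server):

```sh
# Hypothetical app name; repo and path mirror the console settings above.
argocd app create httpserver \
  --repo https://github.com/cncamp/test.git \
  --path . \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
```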
### Create sync
### Scale the deploy by cmd

```sh
k scale deployment httpserver --replicas=2
```
### Check app status and sync again

### Change the sync policy to `auto` and check

module10/harbor/harbor.MD

Lines changed: 47 additions & 25 deletions
### Download harbor helm chart

```sh
helm repo add harbor https://helm.goharbor.io
helm fetch harbor/harbor --untar
kubectl create ns harbor
```
### Update values.yaml

```sh
vi ./harbor/values.yaml
```

And change:

```yaml
expose:
  type: nodePort
  tls:
    commonName: 'core.harbor.domain'

persistence: false
```
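Alternatively, the same overrides can be passed with `--set` at install time instead of editing values.yaml (a sketch; the key paths `expose.tls.auto.commonName` and `persistence.enabled` are assumptions about the upstream chart layout, so verify them against the chart's own values.yaml):

```sh
# Equivalent overrides on the command line (check key paths against
# the harbor chart's values.yaml before relying on them).
helm install harbor ./harbor -n harbor \
  --set expose.type=nodePort \
  --set expose.tls.auto.commonName=core.harbor.domain \
  --set persistence.enabled=false
```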
### Install helm chart

```sh
helm install harbor ./harbor -n harbor
```
### Wait for all pods to be ready and access the harbor portal

http://192.168.34.2:30002

- username: `admin`
- password: `Harbor12345`
### Download repository certs from

https://192.168.34.2:30003/harbor/projects/1/repositories
### Copy the downloaded ca.crt to the vm docker certs configuration folder

```sh
mkdir -p /etc/docker/certs.d/core.harbor.domain
cp ca.crt /etc/docker/certs.d/core.harbor.domain/
systemctl restart docker
```
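To sanity-check the certificate you copied, inspect its subject with `openssl` (the self-signed cert generated below is only a stand-in for illustration; in practice run the `x509` command against the ca.crt downloaded from the harbor portal):

```sh
# Generate a stand-in CA cert (illustration only), then print its subject.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/ca.key -out /tmp/ca.crt \
  -days 1 -nodes -subj "/CN=core.harbor.domain"
openssl x509 -in /tmp/ca.crt -noout -subject
```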
### Edit /etc/hosts to map core.harbor.domain to the harbor svc ClusterIP

```
10.104.231.99 core.harbor.domain
```
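The ClusterIP to put in /etc/hosts can be read from the Service itself (a sketch; assumes the release exposes a Service named `harbor` in the `harbor` namespace):

```sh
# Print the ClusterIP of the harbor Service for the /etc/hosts entry.
kubectl get svc harbor -n harbor -o jsonpath='{.spec.clusterIP}'
```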
### Docker login

```sh
docker login -u admin -p Harbor12345 core.harbor.domain
```
### Docker tag an image to core.harbor.domain, push it, and you will see it in the harbor portal
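A concrete tag-and-push sequence (a sketch; `library` is harbor's default public project, and `nginx:latest` is just an example image that must already exist locally):

```sh
# Retag a local image into the harbor registry namespace and push it.
docker tag nginx:latest core.harbor.domain/library/nginx:latest
docker push core.harbor.domain/library/nginx:latest
```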
### Check repositories and blobs

```sh
kubectl exec -it harbor-registry-7d686859d7-xs5nv -n harbor -- bash
ls -la /storage/docker/registry/v2/repositories/
ls -la /storage/docker/registry/v2/blobs
```
### Database operations

```sh
kubectl exec -it harbor-database-0 -n harbor -- bash
psql -U postgres -d postgres -h 127.0.0.1 -p 5432
\c registry
select * from harbor_user;
\dt
```
Lines changed: 15 additions & 10 deletions
### Edit prom configmap

```sh
k edit configmap loki-prometheus-server
```
### Add the following alert to alerting_rules.yml

```yaml
groups:
  - name: example
    rules:
      - alert: ContainerKilled
        expr: time() - container_last_seen > 60
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: 'Container killed (instance {{ $labels.instance }})'
          description: "A container has disappeared\n VALUE = {{ $value }}\n LABELS: {{ $labels }}"
```
### Reload

```sh
curl -XPOST 192.168.166.149:9090/-/reload
```

> Note: the `/-/reload` endpoint only works when Prometheus is started with the `--web.enable-lifecycle` flag.
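Before reloading, the rule file can be validated offline (a sketch; assumes `promtool`, which ships with the Prometheus distribution, is on the PATH and that the rules were saved to a local `alerting_rules.yml`):

```sh
# Syntax-check the alerting rules before the live reload.
promtool check rules alerting_rules.yml
```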

module10/loki-stack/readme.MD

Lines changed: 38 additions & 21 deletions
### Add grafana repo

```sh
helm repo add grafana https://grafana.github.io/helm-charts
```
### Install loki-stack

```sh
helm upgrade --install loki grafana/loki-stack --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false
```
### If you get the following error, it means your k8s version is too new for this chart version

```
Error: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1", unable to recognize "": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1", unable to recognize "": no matches for kind "Role" in version "rbac.authorization.k8s.io/v1beta1", unable to recognize "": no matches for kind "RoleBinding" in version "rbac.authorization.k8s.io/v1beta1"]
```

13-
### download loki-stack
14-
```
19+
### Download loki-stack
20+
21+
```sh
1522
helm pull grafana/loki-stack
1623
tar -xvf loki-stack-*.tgz
17-
```
18-
```
1924
cd loki-stack
2025
```
### Replace all `rbac.authorization.k8s.io/v1beta1` with `rbac.authorization.k8s.io/v1` by

```sh
grep -rl "rbac.authorization.k8s.io/v1beta1" . | xargs sed -i 's/rbac.authorization.k8s.io\/v1beta1/rbac.authorization.k8s.io\/v1/g'
```
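What the one-liner does can be verified on a scratch file first (the `/tmp/rbac-demo` path is only for illustration; the real run operates on the chart directory):

```sh
# Minimal demonstration of the apiVersion rewrite on a scratch copy.
mkdir -p /tmp/rbac-demo
printf 'apiVersion: rbac.authorization.k8s.io/v1beta1\nkind: ClusterRole\n' > /tmp/rbac-demo/role.yaml
grep -rl "rbac.authorization.k8s.io/v1beta1" /tmp/rbac-demo | xargs sed -i 's/rbac.authorization.k8s.io\/v1beta1/rbac.authorization.k8s.io\/v1/g'
cat /tmp/rbac-demo/role.yaml
```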
### Install loki locally

```sh
helm upgrade --install loki ./loki-stack --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false
```
### Change the grafana service to NodePort type and access it

```sh
kubectl edit svc loki-grafana -oyaml -n default
```

And change the type from ClusterIP to NodePort.
The login password is in the secret `loki-grafana`:

```sh
kubectl get secret loki-grafana -oyaml -n default
```
Find `admin-password: xxx` in the output.

```sh
echo 'xxx' | base64 -d
```

Then you will get the grafana login password; the login username is `admin` by default.

> Note: `xxx` is the value of the key `admin-password` in your yaml.

module10/prometheus/prometheus.MD

Lines changed: 0 additions & 12 deletions
This file was deleted.

module10/prometheus/readme.MD

Lines changed: 9 additions & 6 deletions
### Sync VM time with host

```sh
# Shut down the VM first, then:
cd ~/"VirtualBox VMs"
VBoxManage list vms
# "localkube" {014a8874-1cbe-43ec-a47c-ce7248bce13e}
vboxmanage setextradata "localkube" "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" "1"
vboxmanage setextradata "crane" "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" "1"
```
### Start the VM and then you can get prometheus data
### Grafana dashboard

Dashboard IDs to import:

- cluster health: `6417`
- pod dashboard: `9729`
- Istio Mesh: `7639`
