Rename exercises dir
Name it instead "topics" so it won't look strange if some topics include an "exercises" directory.
12
topics/kubernetes/solutions/killing_containers.md
Normal file
@@ -0,0 +1,12 @@
## "Killing" Containers - Solution

1. Run a Pod with a web service (e.g. httpd) - `kubectl run web --image registry.redhat.io/rhscl/httpd-24-rhel7`
2. Verify the web service is running with the `ps` command - `kubectl exec web -- ps`
3. Check how many restarts the Pod has performed - `kubectl get po web`
4. Kill the web service process - `kubectl exec web -- kill 1`
5. Check again how many restarts the Pod has performed - `kubectl get po web`
6. Verify again that the web service is running - `kubectl exec web -- ps`

## After you complete the exercise

* Why did the "RESTARTS" count increase? - `because we killed the process and Kubernetes identified that the container wasn't running properly, so it restarted the Pod`
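Step 4 works because killing a container's main process (PID 1) terminates the container. You can see the same mechanics locally, outside Kubernetes, by killing an ordinary background process; a minimal sketch (`sleep` stands in for httpd):

```shell
# Start a long-running process, the way httpd runs as the container's main process.
sleep 300 &
pid=$!

# kill -0 sends no signal; it only checks the process exists.
kill -0 "$pid" && echo "process $pid is running"

# Kill it, as `kubectl exec web -- kill 1` does to the container's PID 1.
kill "$pid"
wait "$pid" 2>/dev/null || true

# The process is gone, and nothing here restarts it.
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```

The difference in Kubernetes is the restart: a plain shell won't resurrect the process, whereas the Pod's restart policy (`Always` by default) tells the kubelet to restart the exited container, which is why the "RESTARTS" count goes up.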
6
topics/kubernetes/solutions/pods_01_solution.md
Normal file
@@ -0,0 +1,6 @@
## Pods 01 - Solution

```
kubectl run nginx --image=nginx --restart=Never
kubectl get pods
```
62
topics/kubernetes/solutions/replicaset_01_solution.md
Normal file
@@ -0,0 +1,62 @@
## ReplicaSet 01 - Solution

1. Create a ReplicaSet with 2 replicas. The app can be anything.

```
cat > rs.yaml <<EOL
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    app: somewebapp
    type: web
spec:
  replicas: 2
  selector:
    matchLabels:
      type: web
  template:
    metadata:
      labels:
        type: web
    spec:
      containers:
      - name: httpd
        image: registry.redhat.io/rhscl/httpd-24-rhel7
EOL

kubectl apply -f rs.yaml
```
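A note on the heredoc: there is a real difference between `>` (truncate) and `>>` (append) when writing the manifest. With `>>`, re-running the snippet appends a second copy of the manifest to the file, which `kubectl apply` then fails to parse. A quick local sketch of the difference (file paths here are throwaway examples):

```shell
# Truncating heredoc: writing twice leaves a single copy.
cat > /tmp/demo.yaml <<EOL
replicas: 2
EOL
cat > /tmp/demo.yaml <<EOL
replicas: 2
EOL
wc -l < /tmp/demo.yaml   # 1 line

# Appending heredoc: contents pile up on each run.
rm -f /tmp/demo_append.yaml
cat >> /tmp/demo_append.yaml <<EOL
replicas: 2
EOL
cat >> /tmp/demo_append.yaml <<EOL
replicas: 2
EOL
wc -l < /tmp/demo_append.yaml   # 2 lines
```

For a manifest you intend to re-create, the truncating form is the safe choice.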
2. Verify a ReplicaSet was created and there are 2 replicas

```
kubectl get rs
# OR a more specific way: kubectl get -f rs.yaml
```

3. Delete one of the Pods the ReplicaSet has created

```
kubectl delete po <POD_NAME>
```

4. If you list all the Pods now, what will you see?

```
The same number of Pods. Since we defined 2 replicas, the ReplicaSet will make sure to create another Pod to replace the one you've deleted.
```

5. Remove the ReplicaSet you've created

```
kubectl delete -f rs.yaml
```

6. Verify you've deleted the ReplicaSet

```
kubectl get rs
# OR a more specific way: kubectl get -f rs.yaml
```
||||
62
topics/kubernetes/solutions/replicaset_02_solution.md
Normal file
@@ -0,0 +1,62 @@
## ReplicaSet 02 - Solution

1. Create a ReplicaSet with 2 replicas. The app can be anything.

```
cat > rs.yaml <<EOL
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    app: somewebapp
    type: web
spec:
  replicas: 2
  selector:
    matchLabels:
      type: web
  template:
    metadata:
      labels:
        type: web
    spec:
      containers:
      - name: httpd
        image: registry.redhat.io/rhscl/httpd-24-rhel7
EOL

kubectl apply -f rs.yaml
```

2. Verify a ReplicaSet was created and there are 2 replicas

```
kubectl get rs
# OR a more specific way: kubectl get -f rs.yaml
```

3. Remove the ReplicaSet but NOT the Pods it created

```
kubectl delete -f rs.yaml --cascade=false
# In newer kubectl versions: kubectl delete -f rs.yaml --cascade=orphan
```

4. Verify you've deleted the ReplicaSet but the Pods are still running

```
kubectl get rs # no replicas
kubectl get po # Pods still running
```

5. Create again the same ReplicaSet, without changing anything

```
kubectl apply -f rs.yaml
```

6. Verify that the ReplicaSet used the existing Pods and didn't create new Pods

```
kubectl describe rs web # You should see there are no new events, and if you list the Pods with `kubectl get po -l type=web` you'll see they have the same names
```
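Another way to check step 6 is to compare the Pod names captured before deleting the ReplicaSet with the names after re-applying it. The sketch below simulates that with stand-in name lists (`/tmp/before.txt`, `/tmp/after.txt` and the pod names are made up for illustration); on a real cluster each file would come from `kubectl get po -o name`:

```shell
# Stand-in for: kubectl get po -o name > /tmp/before.txt  (before deleting the RS)
printf 'pod/web-abc12\npod/web-def34\n' > /tmp/before.txt
# Stand-in for: kubectl get po -o name > /tmp/after.txt   (after re-applying the RS)
printf 'pod/web-abc12\npod/web-def34\n' > /tmp/after.txt

# comm -3 prints only lines unique to either (sorted) file; empty output
# means the ReplicaSet adopted the existing Pods instead of creating new ones.
if [ -z "$(comm -3 /tmp/before.txt /tmp/after.txt)" ]; then
  echo "same Pods: ReplicaSet re-adopted them"
fi
```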
61
topics/kubernetes/solutions/replicaset_03_solution.md
Normal file
@@ -0,0 +1,61 @@
## ReplicaSet 03 - Solution

1. Create a ReplicaSet with 2 replicas. Make sure the label used for the selector and in the Pods is "type=web"

```
cat > rs.yaml <<EOL
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    app: somewebapp
    type: web
spec:
  replicas: 2
  selector:
    matchLabels:
      type: web
  template:
    metadata:
      labels:
        type: web
    spec:
      containers:
      - name: httpd
        image: registry.redhat.io/rhscl/httpd-24-rhel7
EOL

kubectl apply -f rs.yaml
```

2. Verify a ReplicaSet was created and there are 2 replicas

```
kubectl get rs
# OR a more specific way: kubectl get -f rs.yaml
```

3. List the Pods running and save the output somewhere

```
kubectl get po > running_pods.txt
```

4. Remove the label (type=web) from one of the Pods created by the ReplicaSet

```
kubectl label pod <POD_NAME> type-
```

5. List the Pods running. Are there more Pods running after removing the label? Why?

```
Yes, there is an additional Pod running. Once the label (used as a matching selector) was removed, the Pod became independent: it's no longer controlled by the ReplicaSet, so the ReplicaSet was missing a replica based on its definition and created a new Pod.
```

6. Verify the ReplicaSet indeed created a new Pod

```
kubectl describe rs web
```
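The listing saved in step 3 pays off in step 6: comparing it against a fresh listing shows exactly which Pod is new. A local sketch with made-up pod names (on a real cluster, the second file would be another `kubectl get po` taken after removing the label):

```shell
# Stand-in for the step 3 snapshot: kubectl get po > running_pods.txt
printf 'web-abc12\nweb-def34\n' > /tmp/running_pods.txt
# Stand-in for a listing taken after the label was removed -- one extra Pod.
printf 'web-abc12\nweb-def34\nweb-ghi56\n' > /tmp/pods_now.txt

# Print lines present in the new listing but not in the snapshot:
# these are the Pods the ReplicaSet created in the meantime.
grep -vxFf /tmp/running_pods.txt /tmp/pods_now.txt   # web-ghi56
```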
19
topics/kubernetes/solutions/services_01_solution.md
Normal file
@@ -0,0 +1,19 @@
## Services 01 - Solution

```
kubectl run nginx --image=nginx --restart=Never --port=80 --labels="app=dev-nginx"

cat << EOF > nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: dev-nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9372
EOF
```
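Before applying the Service, you can sanity-check the port mapping the manifest declares without a cluster at all. A minimal sketch that re-creates the manifest above (under a throwaway `/tmp` path) and greps out the relevant fields:

```shell
cat << EOF > /tmp/nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: dev-nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9372
EOF

# The Service accepts traffic on `port` and forwards it to the Pod's `targetPort`.
grep -E 'port:|targetPort:' /tmp/nginx-service.yaml
```

Note that the Pod was started with `--port=80`, so forwarding to `targetPort: 9372` only makes sense if the exercise intends that mismatch; otherwise the targetPort should match the container's listening port.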