# CKA (Certified Kubernetes Administrator)
- [CKA (Certified Kubernetes Administrator)](#cka-certified-kubernetes-administrator)
- [Setup](#setup)
- [Kubernetes Nodes](#kubernetes-nodes)
- [Pods](#pods)
- [Troubleshooting Pods](#troubleshooting-pods)
- [Namespaces](#namespaces)
## Setup
* Set up a Kubernetes cluster. Use one of the following:
  1. Minikube for a free and simple local cluster
  2. A managed cluster (EKS, GKE, AKS)
* Set aliases
```
alias k=kubectl
alias kd='kubectl delete'
alias kds='kubectl describe'
alias ke='kubectl edit'
alias kr='kubectl run'
alias kg='kubectl get'
```
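To make the aliases survive new shell sessions, append them to your shell's rc file. A minimal sketch, assuming bash; the completion lines require the `kubectl` binary and bash-completion to be available when the shell starts:

```shell
# Persist the `k` alias and wire kubectl's tab completion to it
# (completion lines are only evaluated when a new shell sources ~/.bashrc).
cat >> ~/.bashrc <<'EOF'
alias k=kubectl
source <(kubectl completion bash)
complete -o default -F __start_kubectl k
EOF
```

After reloading the shell (`source ~/.bashrc`), tab completion works for `k` just as it does for `kubectl`.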
## Kubernetes Nodes
Run a command to view all nodes of the cluster
`kubectl get nodes`
Note: create an alias (`alias k=kubectl`) and get used to `k get no`
## Pods
Run a command to view all the pods in the current namespace
`kubectl get pods`
Note: create an alias (`alias k=kubectl`) and get used to `k get po`
Run a pod called "nginx-test" using the "nginx" image
`k run nginx-test --image=nginx`
Assuming you have a Pod called "nginx-test", how to remove it?
`k delete po nginx-test`
In what namespace is the etcd pod running? List the pods in that namespace
`k get po -n kube-system`
List pods from all namespaces
`k get po --all-namespaces` (or the shorthand `k get po -A`)
Write a YAML of a Pod with two containers and use the YAML file to create the Pod (use whatever images you prefer)
```
cat > pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: nginx
    image: nginx
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF

k create -f pod.yaml
```
Create a YAML of a Pod without actually running the Pod with the kubectl command (use whatever image you prefer)
`k run some-pod -o yaml --image nginx-unprivileged --dry-run=client > pod.yaml`
How to test a manifest is valid?
Use the `--dry-run=client` flag: it will not actually create the resource, but it will validate it, so you can catch syntax issues this way.
`kubectl create -f YAML_FILE --dry-run=client`
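A sketch of both validation modes, assuming kubectl v1.18+ (where `--dry-run` takes a value) and using `/tmp/demo-pod.yaml` as an example path; the kubectl invocations are shown as comments since they need the binary (and, for the server mode, a cluster):

```shell
# Write an example manifest to validate.
cat > /tmp/demo-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: web
    image: nginx
EOF

# Client-side check (local schema/syntax validation, no API server needed):
#   kubectl create -f /tmp/demo-pod.yaml --dry-run=client
# Server-side check (runs the cluster's admission/validation, persists nothing):
#   kubectl create -f /tmp/demo-pod.yaml --dry-run=server
```

The server-side variant catches problems the client cannot, such as admission webhook rejections.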
### Troubleshooting Pods
You try to run a Pod but see the status "CrashLoopBackOff". What does it mean? How to identify the issue?
A container in the Pod keeps failing to run (for various possible reasons), and Kubernetes restarts it after an increasing delay (the back-off time).
Some reasons for it to fail:
- Misconfiguration - misspelling, unsupported value, etc.
- Resource not available - nodes are down, PV not mounted, etc.
Some ways to debug:
1. `kubectl describe pod POD_NAME`
   1. Focus on `State` (which will be Waiting or CrashLoopBackOff) and `Last State`, which tells what happened before (i.e. why it failed)
2. Run `kubectl logs mypod`
   1. This should show the container's output, which often reveals why it crashed
   2. For a specific container, add `-c CONTAINER_NAME`
3. If you still have no idea why it failed, try `kubectl get events`
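Two more flags worth knowing for the steps above (a sketch, assuming a Pod named `mypod` and a running cluster):

```shell
# Logs of the previous (crashed) container instance, not the current restart:
kubectl logs mypod --previous

# Events sorted chronologically, which makes the failure sequence easier to follow:
kubectl get events --sort-by=.metadata.creationTimestamp
```

`--previous` is especially useful with CrashLoopBackOff, since the current container may have no output yet.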
What does the error ImagePullBackOff mean?
Most likely the image name is misspelled, the image doesn't exist in the registry, or pulling it requires authentication
You can confirm with `kubectl describe po POD_NAME`
## Namespaces
List all the namespaces
`k get ns`
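Namespaces in practice, as a sketch (requires a cluster; `demo-ns` is a hypothetical name):

```shell
kubectl create namespace demo-ns                   # create a namespace
kubectl run nginx-test --image=nginx -n demo-ns    # run a pod inside it
kubectl get po -n demo-ns                          # list pods in that namespace
kubectl delete namespace demo-ns                   # cleanup: deletes everything in it
```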