If you ask yourself "how would I remember writing all of that?", no worries: you can simply run `kubectl run some-pod --image=redis -o yaml --dry-run=client > pod.yaml`. If you ask yourself "how am I supposed to remember this long command?", it's time to change attitude ;)
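For reference, the generated `pod.yaml` will look roughly like this (exact fields can vary slightly between kubectl versions):
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: some-pod
  name: some-pod
spec:
  containers:
  - image: redis
    name: some-pod
    resources: {}        # empty by default; fill in requests/limits if needed
  restartPolicy: Always
```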
</b></details>
<details>
<summary>Run the following command: <code>kubectl run ohno --image=sheris</code>. Did it work? Why not? Fix it without removing the Pod, using any image you want</summary><br><b>
Because there is no such image `sheris`. At least not for now :)
To fix it, run `kubectl edit po ohno` and change the line `- image: sheris` to `- image: redis` (or any other image you prefer).
</b></details>
<details>
<summary>You try to run a Pod but it's in "Pending" state. What might be the reason?</summary><br><b>
One possible reason is that the scheduler, which is supposed to schedule Pods on nodes, is not running. To verify it, you can run `kubectl get po -A | grep scheduler` or check directly in the `kube-system` namespace.
</b></details>
<details>
<summary>How to check how many ReplicaSets are defined in the current namespace?</summary><br><b>
`k get rs`
</b></details>
<details>
<summary>You have a replica set defined to run 3 Pods. You removed one of these 3 Pods. What will happen next? How many Pods will there be?</summary><br><b>
There will still be 3 Pods running, because the goal of the replica set is to ensure exactly that: if you delete one or more Pods, it will create new ones so there are always 3 Pods.
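For reference, a minimal ReplicaSet sketch that keeps 3 replicas running (the names and labels here are only illustrative):
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: some-rs              # illustrative name
spec:
  replicas: 3                # the controller recreates Pods to maintain this count
  selector:
    matchLabels:
      app: some-app
  template:
    metadata:
      labels:
        app: some-app        # must match spec.selector.matchLabels
    spec:
      containers:
      - name: some-app
        image: redis
```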
</b></details>
<details>
<summary>How to check which container image was used as part of a replica set called "repli"?</summary><br><b>
`k describe rs repli | grep -i image`
</b></details>
<details>
<summary>How to check how many Pods are ready as part of a replica set called "repli"?</summary><br><b>
`k describe rs repli | grep -i "Pods Status"`
</b></details>
<details>
<summary>How to delete a replica set called "rori"?</summary><br><b>
`k delete rs rori`
</b></details>
<details>
<summary>How to modify a replica set called "rori" to use a different image?</summary><br><b>
`k edit rs rori`
</b></details>
<details>
<summary>Scale up a replica set called "rori" to run 5 Pods instead of 2</summary><br><b>
`k scale rs rori --replicas=5`
</b></details>
<details>
<summary>Scale down a replica set called "rori" to run 1 Pod instead of 5</summary><br><b>
`k scale rs rori --replicas=1`
</b></details>
### Troubleshooting ReplicaSets
<details>
<summary>Fix the following ReplicaSet definition
```yaml
apiVersion: apps/v1
kind: ReplicaCet
metadata:
  name: redis
  labels:
    app: redis
    tier: cache
spec:
  selector:
    matchLabels:
      tier: cache
  template:
    metadata:
      labels:
        tier: cache
    spec:
      containers:
      - name: redis
        image: redis
```
</summary><br><b>
`kind` should be `ReplicaSet` and not `ReplicaCet` :)
</b></details>
<details>
<summary>Fix the following ReplicaSet definition
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: redis
  labels:
    app: redis
    tier: cache
spec:
  selector:
    matchLabels:
      tier: cache
  template:
    metadata:
      labels:
        tier: cachy
    spec:
      containers:
      - name: redis
        image: redis
```
</summary><br><b>
The selector (`tier: cache`) doesn't match the Pod template label (`tier: cachy`). To solve it, change `cachy` to `cache`, as shown below.
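With the fix applied, the template section would read:
```yaml
  template:
    metadata:
      labels:
        tier: cache   # now matches spec.selector.matchLabels
```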
</b></details>
## Deployments
<details>
<summary>How to list all the deployments in the current namespace?</summary><br><b>
`k get deploy`
</b></details>
<details>
<summary>How to check which image a certain Deployment is using?</summary><br><b>
`k describe deploy <DEPLOYMENT_NAME> | grep -i image`
</b></details>
<details>
<summary>Apply the label "hw=max" on one of the nodes in your cluster</summary><br><b>
`kubectl label nodes some-node hw=max`
</b></details>
<details>
<summary>Create and run a Pod called `some-pod` with the image `redis` and configure it to use the selector `hw=max`</summary><br><b>
```
kubectl run some-pod --image=redis --dry-run=client -o yaml > pod.yaml
vi pod.yaml
spec:
  nodeSelector:
    hw: max
kubectl apply -f pod.yaml
```
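For reference, the relevant part of the edited `pod.yaml` would look roughly like this (generated fields omitted):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: some-pod
spec:
  nodeSelector:
    hw: max                  # schedule only on nodes carrying the label hw=max
  containers:
  - name: some-pod
    image: redis
```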
</b></details>
<details>
<summary>Explain why node selectors might be limited</summary><br><b>
Assume you would like to run your Pod on all the nodes with either `hw` set to max or to min, instead of just max. This is not possible with nodeSelectors, which are quite simplistic, and this is where you might want to consider `node affinity`, as sketched below.
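A sketch of what such a node affinity rule could look like (the label key and values are taken from the example above):
```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: hw
            operator: In   # matches nodes whose hw label is any of the listed values
            values:
            - max
            - min
```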
</b></details>
<details>
<summary>Create a taint on one of the nodes in your cluster with a key of "app", a value of "web" and an effect of "NoSchedule". Verify it was applied</summary><br><b>
`kubectl taint nodes <NODE_NAME> app=web:NoSchedule`
To verify it was applied: `kubectl describe node <NODE_NAME> | grep -i taint`
</b></details>
<details>
<summary>You applied a taint with <code>k taint node minikube app=web:NoSchedule</code> on the only node in your cluster and then executed <code>kubectl run some-pod --image=redis</code>. What will happen?</summary><br><b>
The Pod will remain in "Pending" status because the only node in the cluster has the taint `app=web:NoSchedule` and the Pod has no matching toleration.
</b></details>
<details>
<summary>You applied a taint with <code>k taint node minikube app=web:NoSchedule</code> on the only node in your cluster and then executed <code>kubectl run some-pod --image=redis</code> but the Pod is in pending state. How to fix it?</summary><br><b>
`kubectl edit po some-pod` and add the following toleration under `spec`:
```
tolerations:
- effect: NoSchedule
  key: app
  operator: Equal
  value: web
```
Save and exit. The Pod should be in Running state now.
</b></details>
<details>
<summary>Remove an existing taint from one of the nodes in your cluster</summary><br><b>
`kubectl taint nodes <NODE_NAME> <KEY>-` (the trailing `-` removes the taint)
</b></details>
<details>
<summary>Check if there are any limits on one of the pods in your cluster</summary><br><b>
`kubectl describe po <POD_NAME> | grep -i limits`
</b></details>
<details>
<summary>Run a pod called "yay" with the image "python" and resources request of 64Mi memory and 250m CPU</summary><br><b>
`kubectl run yay --image=python --dry-run=client -o yaml > pod.yaml`
`vi pod.yaml`
```yaml
spec:
  containers:
  - image: python
    imagePullPolicy: Always
    name: yay
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
```
`kubectl apply -f pod.yaml`
</b></details>
<details>
<summary>Run a pod called "yay2" with the image "python". Make sure it has resources request of 64Mi memory and 250m CPU and the limits are 128Mi memory and 500m CPU</summary><br><b>
`kubectl run yay2 --image=python --dry-run=client -o yaml > pod.yaml`