Fix typos (#411)

Found via `codespell -L caf,etcp,alle,aks`
Author: Kian-Meng Ang
Date: 2023-08-25 04:02:53 +08:00
Committed by: GitHub
Parent: bf95e8f81e
Commit: 4b6718938c
29 changed files with 77 additions and 77 deletions

@@ -119,7 +119,7 @@ with `--dry-run` flag which will not actually create it, but it will test it and
</b></details>
<details>
-<summary>How to check how many containers run in signle Pod?</summary><br><b>
+<summary>How to check how many containers run in a single Pod?</summary><br><b>
`k get po POD_NAME` and see the number under "READY" column.
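As a sketch, the output for a hypothetical two-container Pod would look something like this ("2/2" under READY means both containers are running):

```
$ kubectl get po POD_NAME
NAME       READY   STATUS    RESTARTS   AGE
POD_NAME   2/2     Running   0          5m
```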
@@ -178,7 +178,7 @@ Go to that directory and remove the manifest/definition of the static Pod (`rm <S
The container failed to run (due to different reasons) and Kubernetes tries to run the Pod again after some delay (= BackOff time).
Some reasons for it to fail:
-- Misconfiguration - mispelling, non supported value, etc.
+- Misconfiguration - misspelling, non supported value, etc.
- Resource not available - nodes are down, PV not mounted, etc.
Some ways to debug:
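A few commands commonly used for this kind of debugging (a minimal sketch; `POD_NAME` is a placeholder):

```
# Events and state transitions for the Pod
kubectl describe pod POD_NAME
# Logs of the previous (crashed) container instance
kubectl logs POD_NAME --previous
# Cluster events, often showing the failure reason
kubectl get events --sort-by='.metadata.creationTimestamp'
```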
@@ -849,7 +849,7 @@ Running `kubectl get events` you can see which scheduler was used.
</b></details>
<details>
-<summary>You want to run a new Pod and you would like it to be scheduled by a custom schduler. How to achieve it?</summary><br><b>
+<summary>You want to run a new Pod and you would like it to be scheduled by a custom scheduler. How to achieve it?</summary><br><b>
Add the following to the spec of the Pod:
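A minimal sketch of what that spec addition might look like (the scheduler name `my-custom-scheduler` is an assumed placeholder):

```
apiVersion: v1
kind: Pod
metadata:
  name: some-pod
spec:
  schedulerName: my-custom-scheduler  # assumed name of the custom scheduler
  containers:
    - name: app
      image: nginx
```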

@@ -1173,7 +1173,7 @@ Explanation as to who added them:
<details>
<summary>After creating a service that forwards incoming external traffic to the containerized application, how to make sure it works?</summary><br><b>
-You can run `curl <SERIVCE IP>:<SERVICE PORT>` to examine the output.
+You can run `curl <SERVICE IP>:<SERVICE PORT>` to examine the output.
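To find the IP and port to plug in, something like this should work (`SERVICE_NAME` is a placeholder):

```
kubectl get svc SERVICE_NAME   # shows CLUSTER-IP and PORT(S)
curl <SERVICE IP>:<SERVICE PORT>
```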
</b></details>
<details>
@@ -1316,7 +1316,7 @@ To run two instances of the application?
`kubectl scale deployment <DEPLOYMENT_NAME> --replicas=2`
-You can speciy any other number, given that your application knows how to scale.
+You can specify any other number, given that your application knows how to scale.
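To verify the scale-out worked, a quick check (hypothetical output):

```
$ kubectl get deployment <DEPLOYMENT_NAME>
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
<DEPLOYMENT_NAME>   2/2     2            2           10m
```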
</b></details>
### ReplicaSets
@@ -1791,9 +1791,9 @@ False. When a namespace is deleted, the resources in that namespace are deleted
</b></details>
<details>
-<summary>While namspaces do provide scope for resources, they are not isolating them</summary><br><b>
+<summary>While namespaces do provide scope for resources, they are not isolating them</summary><br><b>
-True. Try create two pods in two separate namspaces for example, and you'll see there is a connection between the two.
+True. Try creating two pods in two separate namespaces, for example, and you'll see there is a connection between the two.
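A rough sketch of such a check (all names here are placeholders): the second Pod can reach the first one by its Pod IP even though they live in different namespaces.

```
kubectl create ns ns-a
kubectl create ns ns-b
kubectl run web --image=nginx -n ns-a
kubectl run client --image=busybox -n ns-b --restart=Never -- sleep 3600
# Get the web Pod's IP, then reach it from the other namespace
kubectl get pod web -n ns-a -o wide
kubectl exec -n ns-b client -- wget -qO- <WEB_POD_IP>
```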
</b></details>
#### Namespaces - commands
@@ -1858,7 +1858,7 @@ If the namespace doesn't exist already: `k create ns dev`
<details>
<summary>What kube-node-lease contains?</summary><br><b>
-It holds information on hearbeats of nodes. Each node gets an object which holds information about its availability.
+It holds information on heartbeats of nodes. Each node gets an object which holds information about its availability.
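You can list those Lease objects directly (one per node):

```
kubectl get leases -n kube-node-lease
```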
</b></details>
<details>
@@ -2854,7 +2854,7 @@ Running `kubectl get events` you can see which scheduler was used.
</b></details>
<details>
-<summary>You want to run a new Pod and you would like it to be scheduled by a custom schduler. How to achieve it?</summary><br><b>
+<summary>You want to run a new Pod and you would like it to be scheduled by a custom scheduler. How to achieve it?</summary><br><b>
Add the following to the spec of the Pod:
@@ -2912,7 +2912,7 @@ Exit and save. The pod should be in Running state now.
`NoSchedule`: prevents resources from being scheduled on a certain node
`PreferNoSchedule`: will prefer to schedule resources on other nodes before resorting to scheduling the resource on the chosen node (on which the taint was applied)
-`NoExecute`: Appling "NoSchedule" will not evict already running Pods (or other resources) from the node as opposed to "NoExecute" which will evict any already running resource from the Node
+`NoExecute`: Applying "NoSchedule" will not evict already running Pods (or other resources) from the node, as opposed to "NoExecute", which will evict any already running resource from the Node
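As a sketch, applying and removing such a taint (node name and key/value are placeholders):

```
# Apply a taint with the NoExecute effect (evicts Pods without a matching toleration)
kubectl taint nodes node1 app=blue:NoExecute
# Remove it again (note the trailing dash)
kubectl taint nodes node1 app=blue:NoExecute-
```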
</b></details>
### Resource Limits
@@ -3122,7 +3122,7 @@ Namespaces. See the following [namespaces question and answer](#namespaces-use-c
The container failed to run (due to different reasons) and Kubernetes tries to run the Pod again after some delay (= BackOff time).
Some reasons for it to fail:
-- Misconfiguration - mispelling, non supported value, etc.
+- Misconfiguration - misspelling, non supported value, etc.
- Resource not available - nodes are down, PV not mounted, etc.
Some ways to debug:

@@ -9,4 +9,4 @@
## After you complete the exercise
-* Why did the "RESTARTS" count raised? - `because we killed the process and Kubernetes identified the container isn't running proprely so it performed restart to the Pod`
+* Why did the "RESTARTS" count raise? - `because we killed the process and Kubernetes identified the container isn't running properly, so it restarted the Pod`
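Hypothetical output after the kill, with the counter bumped from 0 to 1:

```
$ kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
my-pod   1/1     Running   1          2m
```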

@@ -51,7 +51,7 @@ kubectl label pod <POD_NAME> type-
5. List the Pods running. Are there more Pods running after removing the label? Why?
```
-Yes, there is an additional Pod running because once the label (used as a matching selector) was removed, the Pod became independant meaning, it's not controlled by the ReplicaSet anymore and the ReplicaSet was missing replicas based on its definition so, it created a new Pod.
+Yes, there is an additional Pod running because once the label (used as a matching selector) was removed, the Pod became independent, meaning it's no longer controlled by the ReplicaSet; the ReplicaSet was then missing a replica based on its definition, so it created a new Pod.
```
6. Verify the ReplicaSet indeed created a new Pod
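One way to check (a sketch; the ReplicaSet name is a placeholder):

```
kubectl get rs <REPLICASET_NAME>     # DESIRED and CURRENT should match again
kubectl get pods --show-labels       # the replacement Pod's AGE will be much lower
```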