Add k8s questions

Updated CKA page as well.
abregman 2022-10-21 12:02:30 +03:00
parent ad66a50f3a
commit 422a48a34c
5 changed files with 551 additions and 76 deletions

View File

@ -2,7 +2,7 @@
:information_source:  This repo contains questions and exercises on various technical topics, sometimes related to DevOps and SRE
:bar_chart:  There are currently **2406** exercises and questions
:books:  To learn more about DevOps and SRE, check the resources in [devops-resources](https://github.com/bregman-arie/devops-resources) repository

View File

@ -26,6 +26,20 @@ Use the CLI to access advanced tools locally.
Get flaky test detection with test insights."
</b></details>
<details>
<summary>Explain the following:
* Pipeline
* Workflow
* Jobs
* Steps
</summary><br><b>
* Pipeline: the entire CI/CD configuration (`.circleci/config.yml`)
* Workflow: orchestrates the jobs - defines their order and dependencies (mainly relevant when there is more than one job in the configuration)
* Jobs: one or more steps executed in a single execution environment as part of the CI/CD process
* Steps: the actual commands to execute
</b></details>
<details>
<summary>What is an Orb?</summary><br><b>
@ -42,3 +56,30 @@ They can come from the public registry or defined privately as part of an organi
`.circleci/config.yml`
</b></details>
<details>
<summary>Explain the following configuration file
```
version: 2.1

jobs:
  say-hello:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: "Say hello"
          command: "echo Hello, World!"

workflows:
  say-hello-workflow:
    jobs:
      - say-hello
```
</summary><br><b>
This configuration file sets up one job that checks out the project code and runs the command `echo Hello, World!`.
It runs in a container using the image `cimg/base:stable`.
</b></details>
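A hypothetical extension of the same config (not part of the original question) showing how a second job and a `requires` dependency would make the workflow orchestrate multiple jobs. The `run-tests` job name and its command are made up for illustration:

```
version: 2.1

jobs:
  say-hello:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: "Say hello"
          command: "echo Hello, World!"

  # Hypothetical second job, added only to illustrate workflow orchestration
  run-tests:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: "Run tests"
          command: "echo 'pretend we run tests here'"

workflows:
  say-hello-workflow:
    jobs:
      - say-hello
      # run-tests starts only after say-hello finishes successfully
      - run-tests:
          requires:
            - say-hello
```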

View File

@ -16,6 +16,7 @@
- [Labels and Selectors](#labels-and-selectors)
- [Node Selector](#node-selector)
- [Taints](#taints)
- [Resources Limits](#resources-limits)
## Setup
@ -255,6 +256,12 @@ Note: create an alias (`alias k=kubectl`) and get used to `k get no`
`k get nodes -o json > some_nodes.json`
</b></details>
<details>
<summary>Check what labels one of your nodes in the cluster has</summary><br><b>
`k get no minikube --show-labels`
</b></details>
## Services
<details>
@ -450,6 +457,42 @@ The selector doesn't match the label (cache vs cachy). To solve it, fix cachy so
</b></details>
<details>
<summary>Create a deployment called "pluck" using the image "redis" and make sure it runs 5 replicas</summary><br><b>
`kubectl create deployment pluck --image=redis`
`kubectl scale deployment pluck --replicas=5`
</b></details>
<details>
<summary>Create a deployment with the following properties:
* called "blufer"
* using the image "python"
* runs 3 replicas
* all pods will be placed on a node that has the label "blufer"
</summary><br><b>
`kubectl create deployment blufer --image=python --replicas=3 -o yaml --dry-run=client > deployment.yaml`
Edit the file (`vi deployment.yaml`) and add the following section under the Pod template spec (`spec.template.spec`):
```
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: blufer
            operator: Exists
```
`kubectl apply -f deployment.yaml`
</b></details>
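For reference, a minimal sketch of how the full manifest might look once the affinity block is placed under the Pod template spec. The labels and container name are illustrative assumptions, not mandated by the question:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blufer
  labels:
    app: blufer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blufer
  template:
    metadata:
      labels:
        app: blufer
    spec:
      # Schedule only on nodes that carry the "blufer" label (any value)
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: blufer
                operator: Exists
      containers:
      - name: python
        image: python
```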
### Troubleshooting Deployments
<details>
@ -672,3 +715,58 @@ Exit and save. The pod should be in Running state now.
`k taint node minikube app=web:NoSchedule-`
</b></details>
## Resources Limits
<details>
<summary>Check if there are any limits on one of the pods in your cluster</summary><br><b>
`kubectl describe po <POD_NAME> | grep -i limits`
</b></details>
<details>
<summary>Run a pod called "yay" with the image "python" and resources request of 64Mi memory and 250m CPU</summary><br><b>
`kubectl run yay --image=python --dry-run=client -o yaml > pod.yaml`
`vi pod.yaml`
```
spec:
  containers:
  - image: python
    imagePullPolicy: Always
    name: yay
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
```
`kubectl apply -f pod.yaml`
</b></details>
<details>
<summary>Run a pod called "yay2" with the image "python". Make sure it has resources request of 64Mi memory and 250m CPU and the limits are 128Mi memory and 500m CPU</summary><br><b>
`kubectl run yay2 --image=python --dry-run=client -o yaml > pod.yaml`
`vi pod.yaml`
```
spec:
  containers:
  - image: python
    imagePullPolicy: Always
    name: yay2
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 250m
        memory: 64Mi
```
`kubectl apply -f pod.yaml`
</b></details>

View File

@ -20,22 +20,32 @@ What's your goal?
- [Kubernetes Questions](#kubernetes-questions)
- [Kubernetes 101](#kubernetes-101)
- [Cluster and Architecture](#cluster-and-architecture)
- [Kubelet](#kubelet)
- [Nodes Commands](#nodes-commands)
- [Pods](#pods-1)
- [Static Pods](#static-pods)
- [Pods - Commands](#pods---commands)
- [Pods - Troubleshooting and Debugging](#pods---troubleshooting-and-debugging)
- [Labels and Selectors](#labels-and-selectors-1)
- [Deployments](#deployments)
- [Deployments Commands](#deployments-commands)
- [Services](#services)
- [Ingress](#ingress)
- [ReplicaSets](#replicasets)
- [DaemonSet](#daemonset)
- [DaemonSet - Commands](#daemonset---commands)
- [StatefulSet](#statefulset)
- [Storage](#storage)
- [Volumes](#volumes)
- [Networking](#networking)
- [Network Policies](#network-policies)
- [etcd](#etcd)
- [Namespaces](#namespaces)
- [Namespaces - commands](#namespaces---commands)
- [Resources Quota](#resources-quota)
- [Operators](#operators)
- [Secrets](#secrets)
- [Volumes](#volumes-1)
- [Access Control](#access-control)
- [Patterns](#patterns)
- [CronJob](#cronjob)
@ -49,7 +59,9 @@ What's your goal?
- [Controllers](#controllers)
- [Scheduler](#scheduler-1)
- [Node Affinity](#node-affinity)
- [Taints](#taints)
- [Resource Limits](#resource-limits)
- [Resources Limits - Commands](#resources-limits---commands)
- [Scenarios](#scenarios)
## Kubernetes Exercises
@ -170,6 +182,7 @@ Becaused container is not a Kubernetes object. The smallest object unit in Kuber
- Always make sure Kubernetes YAML files are valid. Applying automated checks and pipelines is recommended.
- Always specify requests and limits to prevent a situation where containers use the entire cluster memory, which may lead to an OOM issue
- Specify labels to logically group Pods, Deployments, etc. Labels can be used, among other things, to identify the type of application
</b></details>
### Cluster and Architecture
@ -201,13 +214,6 @@ The master coordinates all the workflows in the cluster:
</b></details>
<details>
<summary>Which command will list the nodes of the cluster?</summary><br><b>
`kubectl get nodes`
</b></details>
<details>
<summary>Describe shortly and in high-level, what happens when you run <code>kubectl get nodes</code></summary><br><b>
@ -285,6 +291,35 @@ Apply requests and limits, especially on third party applications (where the unc
Outputs the status of each of the control plane components.
</b></details>
#### Kubelet
<details>
<summary>What happens to running pods if you stop Kubelet on the worker nodes?</summary><br><b>
</b></details>
#### Nodes Commands
<details>
<summary>Run a command to view all nodes of the cluster</summary><br><b>
`kubectl get nodes`
Note: You might want to create an alias (`alias k=kubectl`) and get used to `k get no`
</b></details>
<details>
<summary>Create a list of all nodes in JSON format and store it in a file called "some_nodes.json"</summary><br><b>
`k get nodes -o json > some_nodes.json`
</b></details>
<details>
<summary>Check what labels one of your nodes in the cluster has</summary><br><b>
`k get no minikube --show-labels`
</b></details>
### Pods
<details>
@ -361,16 +396,6 @@ False. A single Pod can run on a single node.
<summary>You run a pod and you see the status <code>ContainerCreating</code></summary><br><b>
</b></details>
<details>
<summary>What are "Static Pods"?</summary><br><b>
* Managed directly by Kubelet on specific node
* API server is not observing static Pods
* For each static Pod there is a mirror Pod on kubernetes API server but it can't be managed from there
Read more about it [here](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod)
</b></details>
<details>
<summary>True or False? A volume defined in Pod can be accessed by all the containers of that Pod</summary><br><b>
@ -510,6 +535,54 @@ False. Each Pod gets an IP address but an internal one and not publicly accessib
To make a Pod externally accessible, we need to use an object called Service in Kubernetes.
</b></details>
#### Static Pods
<details>
<summary>What are Static Pods?</summary><br><b>
[Kubernetes.io](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/): "Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them. Unlike Pods that are managed by the control plane (for example, a Deployment); instead, the kubelet watches each static Pod (and restarts it if it fails)."
</b></details>
<details>
<summary>True or False? The same as there are "Static Pods" there are other static resources like "deployments" and "replicasets"</summary><br><b>
False.
</b></details>
<details>
<summary>What are some use cases for using Static Pods?</summary><br><b>
One clear use case is running control plane Pods - Pods such as kube-apiserver, the scheduler, etc. These should run and operate regardless of whether other components of the cluster are working, and they should run on specific nodes of the cluster.
</b></details>
<details>
<summary>How to identify which Pods are Static Pods?</summary><br><b>
The suffix of the Pod name is the same as the name of the node on which it is running.
TODO: check if it's always the case.
</b></details>
<details>
<summary>Which of the following is not a static pod?:
* kube-scheduler
* kube-proxy
* kube-apiserver
</summary><br><b>
kube-proxy - it runs as a DaemonSet (since it has to be present on every node in the cluster). There is no one specific node on which it has to run.
</b></details>
<details>
<summary>Where are static Pod manifests located?</summary><br><b>
Most of the time it's `/etc/kubernetes/manifests`, but you can verify with `grep -i static /var/lib/kubelet/config.yaml` to locate the value of `staticPodPath`.
It might be that your config is in a different path. To verify, run `ps -ef | grep kubelet` and check the value of the `--config` argument of the `/usr/bin/kubelet` process.
The key itself for defining the path of static Pods is `staticPodPath`. So if your config is in `/var/lib/kubelet/config.yaml`, you can run `grep staticPodPath /var/lib/kubelet/config.yaml`.
</b></details>
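A minimal sketch of a static Pod manifest, assuming the default `staticPodPath` of `/etc/kubernetes/manifests`. The name and image are made up for illustration - placing such a file in that directory is enough for the kubelet to run it, no `kubectl apply` needed:

```
# Hypothetical file: /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: static-example
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: http
      containerPort: 80
```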
#### Pods - Commands
<details>
@ -552,6 +625,55 @@ To count them: `k get po -l env=prod --no-headers | wc -l`
One possible reason is that the scheduler, which is supposed to schedule Pods on nodes, is not running. To verify it, you can run `kubectl get po -A | grep scheduler` or check directly in the `kube-system` namespace.
</b></details>
<details>
<summary>What does the <code>kubectl logs [pod-name]</code> command do?</summary><br><b>
Prints the logs for a container in a pod.
</b></details>
<details>
<summary>What does the <code>kubectl describe pod [pod-name]</code> command do?</summary><br><b>
Show details of a specific resource or group of resources.
</b></details>
### Labels and Selectors
<details>
<summary>Explain Labels</summary><br><b>
[Kubernetes.io](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/): "Labels are key/value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key/value labels defined. Each Key must be unique for a given object."
</b></details>
<details>
<summary>Explain selectors</summary><br><b>
[Kubernetes.io](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors): "Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).
Via a label selector, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.
The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (&&) operator."
</b></details>
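A hypothetical Deployment fragment illustrating both selector types side by side - `matchLabels` (equality-based) and `matchExpressions` (set-based). The label keys and values are made up:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web                # label attached to the Deployment itself
spec:
  replicas: 2
  selector:
    matchLabels:            # equality-based: app == web
      app: web
    matchExpressions:       # set-based: env in (prod, staging)
    - key: env
      operator: In
      values:
      - prod
      - staging
  template:
    metadata:
      labels:               # Pod labels must satisfy the selector above
        app: web
        env: prod
    spec:
      containers:
      - name: web
        image: nginx
```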
<details>
<summary>Provide some actual examples of how labels are used</summary><br><b>
* Can be used by the scheduler to place certain Pods (with certain labels) on specific nodes
* Used by ReplicaSets to track the Pods they need to manage and scale
</b></details>
<details>
<summary>What are Annotations?</summary><br><b>
[Kubernetes.io](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/): "You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata."
</b></details>
<details>
<summary>How are annotations different from labels?</summary><br><b>
[Kubernetes.io](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/): "Labels can be used to select objects and to find collections of objects that satisfy certain conditions. In contrast, annotations are not used to identify and select objects. The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels."
</b></details>
### Deployments
<details>
@ -641,20 +763,6 @@ Using a Service.
<summary>Can you use a Deployment for stateful applications?</summary><br><b>
</b></details>
<details>
<summary>Create a file definition/manifest of a deployment called "dep", with 3 replicas that uses the image 'redis'</summary><br><b>
`k create deploy dep -o yaml --image=redis --dry-run=client --replicas 3 > deployment.yaml `
</b></details>
<details>
<summary>Delete the deployment `depdep`</summary><br><b>
`k delete deploy depdep`
</b></details>
<details>
<summary>Fix the following deployment manifest
@ -723,6 +831,57 @@ status: {}
The selector doesn't match the label (dep vs depdep). To solve it, fix depdep so it's dep instead.
</b></details>
#### Deployments Commands
<details>
<summary>Create a file definition/manifest of a deployment called "dep", with 3 replicas that uses the image 'redis'</summary><br><b>
`k create deploy dep -o yaml --image=redis --dry-run=client --replicas 3 > deployment.yaml`
</b></details>
<details>
<summary>Delete the deployment `depdep`</summary><br><b>
`k delete deploy depdep`
</b></details>
<details>
<summary>Create a deployment called "pluck" using the image "redis" and make sure it runs 5 replicas</summary><br><b>
`kubectl create deployment pluck --image=redis`
`kubectl scale deployment pluck --replicas=5`
</b></details>
<details>
<summary>Create a deployment with the following properties:
* called "blufer"
* using the image "python"
* runs 3 replicas
* all pods will be placed on a node that has the label "blufer"
</summary><br><b>
`kubectl create deployment blufer --image=python --replicas=3 -o yaml --dry-run=client > deployment.yaml`
Edit the file (`vi deployment.yaml`) and add the following section under the Pod template spec (`spec.template.spec`):
```
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: blufer
            operator: Exists
```
`kubectl apply -f deployment.yaml`
</b></details>
### Services
<details>
@ -1259,13 +1418,6 @@ Few notes:
- type can be different and doesn't have to be specifically "NodePort"
</b></details>
<details>
<summary>What's the difference between a ReplicaSet and DaemonSet?</summary><br><b>
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time.
A DaemonSet ensures that all Nodes run a copy of a Pod.
</b></details>
<details>
<summary>Fix the following ReplicaSet definition
@ -1360,6 +1512,45 @@ The selector doesn't match the label (cache vs cachy). To solve it, fix cachy so
`k scale rs rori --replicas=1`
</b></details>
### DaemonSet
<details>
<summary>What's a DaemonSet?</summary><br><b>
[Kubernetes.io](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset): "A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created."
</b></details>
<details>
<summary>What's the difference between a ReplicaSet and DaemonSet?</summary><br><b>
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time.
A DaemonSet ensures that all Nodes run a copy of a Pod.
</b></details>
<details>
<summary>What are some use cases for using a DaemonSet?</summary><br><b>
* Monitoring: you would like to perform monitoring on every node in the cluster. For example, a Datadog agent Pod runs on every node using a DaemonSet
* Logging: you would like to have logging set up on every node in your cluster
* Networking: there is a networking component you need on every node so that all nodes can communicate with each other
</b></details>
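A minimal sketch of a DaemonSet manifest for the logging use case. The `fluentd` image, names and paths are illustrative assumptions:

```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: log-agent
        image: fluentd
        # Read the node's container logs from the host
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```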
<details>
<summary>How does a DaemonSet work?</summary><br><b>
Historically, up to Kubernetes 1.12, it was done with the `nodeName` attribute.
Starting with 1.12, it's achieved with the default scheduler and node affinity.
</b></details>
#### DaemonSet - Commands
<details>
<summary>How to list all daemonsets in the current namespace?</summary><br><b>
`kubectl get ds`
</b></details>
### StatefulSet
<details>
@ -1370,23 +1561,47 @@ StatefulSet is the workload API object used to manage stateful applications. Man
### Storage
#### Volumes
<details>
<summary>What is a volume in regards to Kubernetes?</summary><br><b>
A directory accessible by the containers inside a certain Pod. The mechanism responsible for creating the directory, managing it, ... mainly depends on the volume type.
</b></details>
<details>
<summary>What volume types are you familiar with?</summary><br><b>
* emptyDir: created when a Pod is assigned to a node and ceases to exist when the Pod is no longer running on that node
* hostPath: mounts a path from the host itself. Usually not used due to security risks but has multiple use-cases where it's needed like access to some internal host paths (`/sys`, `/var/lib`, etc.)
</b></details>
<details>
<summary>Which problems do volumes in Kubernetes solve?</summary><br><b>
1. Sharing files between containers running in the same Pod
2. Storage in containers is ephemeral - it usually doesn't last for long. For example, when a container crashes, you lose all on-disk data. Persistent volumes allow you to handle such situations
</b></details>
<details>
<summary>Explain ephemeral volume types vs. persistent volumes in regards to Pods</summary><br><b>
Ephemeral volume types have the lifetime of a pod as opposed to persistent volumes which exist beyond the lifetime of a Pod.
</b></details>
<details>
<summary>Provide at least one use-case for each of the following volume types:
* emptyDir
* hostPath
</summary><br><b>
* emptyDir: you need temporary data that you can afford to lose if the Pod is deleted. For example, short-lived data required for one-time operations
* hostPath: you need access to paths on the host itself (like data from `/sys` or data generated in `/var/lib`)
</b></details>
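A hypothetical Pod manifest sketching both volume types side by side; names and paths are illustrative:

```
apiVersion: v1
kind: Pod
metadata:
  name: volumes-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch            # emptyDir: temporary data, gone when the Pod is removed
      mountPath: /tmp/scratch
    - name: host-sys           # hostPath: read data from the node itself
      mountPath: /host/sys
      readOnly: true
  volumes:
  - name: scratch
    emptyDir: {}
  - name: host-sys
    hostPath:
      path: /sys
```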
### Networking
@ -1396,6 +1611,17 @@ Ephemeral volume types have the lifetime of a pod as opposed to persistent volum
False. By default two Pods in two different namespaces are able to communicate with each other.
Try it for yourself:
`kubectl run test-prod -n prod --image ubuntu -- sleep 2000000000`
`kubectl run test-dev -n dev --image ubuntu -- sleep 2000000000`
Run `k describe po test-prod -n prod` to get the IP of the test-prod Pod.
Access the dev Pod: `kubectl exec --stdin --tty test-dev -n dev -- /bin/bash`
Ping the IP of the test-prod Pod you got earlier. You'll see that there is communication between the two Pods, in two separate namespaces.
</b></details>
### Network Policies
@ -1516,6 +1742,14 @@ False. When a namespace is deleted, the resources in that namespace are deleted
* System processes
</b></details>
<details>
<summary>True or False? While namespaces do provide scope for resources, they do not isolate them</summary><br><b>
True. Try creating two Pods in two separate namespaces, for example, and you'll see there is a connection between the two.
</b></details>
#### Namespaces - commands
<details>
<summary>How to list all namespaces?</summary><br><b>
@ -1595,6 +1829,8 @@ OR
`kubens some-namespace`
</b></details>
#### Resources Quota
<details>
<summary>What is Resource Quota?</summary><br><b>
@ -1696,16 +1932,6 @@ kubectl api-resources --namespaced=false
kubectl delete pods --field-selector=status.phase!='Running'
</b></details>
<details>
<summary>What <code>kubectl logs [pod-name]</code> command does?</summary><br><b>
Print the logs for a container in a pod.
</b></details>
<details>
<summary>What <code>kubectl describe pod [pod name] does?</code> command does?</summary><br><b>
Show details of a specific resource or group of resources.
</b></details>
<details>
<summary>How to display the resource usage of pods?</summary><br><b>
@ -1713,7 +1939,7 @@ kubectl top pod
</b></details>
<details>
<summary>Perhaps a general question, but you suspect one of the pods is having issues and you don't know what exactly. What do you do?</summary><br><b>
Start by inspecting the Pod's status. We can use the command `kubectl get pods` (`--all-namespaces` for pods in the system namespace)<br>
@ -1724,14 +1950,6 @@ In case we find out there was a temporary issue with the pod or the system, we c
Setting the replicas to 0 will shut down the process. Now start it with `kubectl scale deployment [name] --replicas=1`
</b></details>
<details>
<summary>What the Kubernetes Scheduler does?</summary><br><b>
</b></details>
<details>
<summary>What happens to running pods if if you stop Kubelet on the worker nodes?</summary><br><b>
</b></details>
<details>
<summary>What happens when Pods use too much memory (more than their limit)?</summary><br><b>
@ -2073,9 +2291,11 @@ The pod is automatically assigned with the default service account (in the names
<details>
<summary>Explain the sidecar container pattern</summary><br><b>
The sidecar pattern is a single-node pattern made up of two containers. The first is the application container. It contains the core logic for the application. In simpler words, when you have a Pod and there is more than one container running in that Pod that supports or complements the application container, it means you use the sidecar pattern.
Without this container, the application would not exist. In addition to the application container, there is a sidecar container.
</b></details>
### CronJob
@ -2529,7 +2749,7 @@ True
False. The scheduler tries to find a node that meets the requirements/rules and if it doesn't it will schedule the Pod anyway.
</b></details>
### Taints
<details>
<summary>Check if there are taints on node "master"</summary><br><b>
@ -2580,6 +2800,76 @@ Exit and save. The pod should be in Running state now.
`NoExecute`: Applying "NoSchedule" will not evict already running Pods (or other resources) from the node, as opposed to "NoExecute" which will evict any already running resource from the node
</b></details>
### Resource Limits
<details>
<summary>Explain why one would specify resource limits in regards to Pods</summary><br><b>
* You know how much RAM and/or CPU your app should be consuming and anything above that is not valid
* You would like to make sure that everyone can run their apps in the cluster and resources are not being solely used by one type of application
</b></details>
<details>
<summary>True or False? Resource limits are applied at the Pod level, meaning if the limit is 2GB RAM and there are two containers in a Pod, each container gets 1GB RAM</summary><br><b>
False. It's per container, not per Pod.
</b></details>
#### Resources Limits - Commands
<details>
<summary>Check if there are any limits on one of the pods in your cluster</summary><br><b>
`kubectl describe po <POD_NAME> | grep -i limits`
</b></details>
<details>
<summary>Run a pod called "yay" with the image "python" and resources request of 64Mi memory and 250m CPU</summary><br><b>
`kubectl run yay --image=python --dry-run=client -o yaml > pod.yaml`
`vi pod.yaml`
```
spec:
  containers:
  - image: python
    imagePullPolicy: Always
    name: yay
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
```
`kubectl apply -f pod.yaml`
</b></details>
<details>
<summary>Run a pod called "yay2" with the image "python". Make sure it has resources request of 64Mi memory and 250m CPU and the limits are 128Mi memory and 500m CPU</summary><br><b>
`kubectl run yay2 --image=python --dry-run=client -o yaml > pod.yaml`
`vi pod.yaml`
```
spec:
  containers:
  - image: python
    imagePullPolicy: Always
    name: yay2
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 250m
        memory: 64Mi
```
`kubectl apply -f pod.yaml`
</b></details>
### Scenarios
<details>
@ -2611,3 +2901,9 @@ Some ways to debug:
Yes, using taints. We could run the following command and it will prevent all resources with the label "app=web" from being scheduled on node1: `kubectl taint node node1 app=web:NoSchedule`
</b></details>
<details>
<summary>You would like to limit the number of resources being used in your cluster. For example no more than 4 replicasets, 2 services, etc. How would you achieve that?</summary><br><b>
Using ResourceQuota
</b></details>
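A minimal sketch of such a ResourceQuota using object-count quotas; the namespace name is a made-up example:

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: some-namespace   # hypothetical namespace
spec:
  hard:
    count/replicasets.apps: "4"
    services: "2"
```

Once applied (`kubectl apply -f quota.yaml`), the API server rejects creating more objects of these kinds in that namespace than the quota allows.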

View File

@ -6,13 +6,14 @@
- [AWS](#aws)
- [Questions](#questions)
- [Terraform 101](#terraform-101-1)
- [Terraform Hands-On Basics](#terraform-hands-on-basics)
- [Providers](#providers)
- [Provisioners](#provisioners)
- [Modules](#modules)
- [Variables](#variables)
- [State](#state)
- [Import](#import)
- [AWS](#aws-1)
## Exercises
@ -34,15 +35,22 @@
## Questions
<a name="questions-terraform-101"></a>
### Terraform 101
<details>
<summary>What is Terraform?</summary><br><b>
[Terraform](https://www.terraform.io/intro): "HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features."
</b></details>
<details>
<summary>What are the advantages in using Terraform or IaC in general?</summary><br><b>
- Full automation: In the past, resource creation, modification and removal were handled manually or by using a set of tooling. With Terraform or other IaC technologies, you manage the full lifecycle in an automated fashion.<br>
- Modular and Reusable: Code that you write for certain purposes can be used and assembled in different ways. You can write code to create resources on a public cloud and it can be shared with other teams who can also use it in their account on the same (or different) cloud<br>
- Improved testing: Concepts like CI can be easily applied on IaC based projects and code snippets. This allows you to test and verify operations beforehand
</b></details>
<details>
@ -51,27 +59,34 @@
- Declarative: Terraform uses the declarative approach (rather than the procedural one) in order to define the end-status of the resources
- No agents: as opposed to other technologies (e.g. Puppet) where you use a model of agent and server, with Terraform you use the different APIs (of clouds, services, etc.) to perform the operations
- Community: Terraform has a strong community which constantly publishes modules and fixes when needed. This ensures there is good module maintenance and users can get support quite quickly at any point
</b></details>
<details>
<summary>What language does Terraform use?</summary><br><b>
HCL (HashiCorp Configuration Language). A declarative language for defining infrastructure.
</b></details>
<details>
<summary>What's a typical Terraform workflow?</summary><br><b>
1. Write Terraform definitions: `.tf` files written in HCL that describe the desired infrastructure state (and run `terraform init` at the very beginning)
2. Review: With a command such as `terraform plan` you can get a glance at what Terraform will perform with the written definitions
3. Apply definitions: With the command `terraform apply` Terraform will apply the given definitions, by adding, modifying or removing the resources
This is a manual process. Most of the time it is automated: a user submits a PR/MR to propose Terraform changes, there is a process to test these changes, and once merged they are applied (`terraform apply`).
</b></details>
<details>
<summary>What are some use cases for using Terraform?</summary><br><b>
- Infra provisioning and management: You need to automate or codify your infra so you are able to test it easily, apply it and make any changes necessary
- Multi-cloud environment: You manage infrastructure on different clouds, but are looking for a consistent way to do it across the clouds
- Consistent environments: You manage environments such as test, production, staging, ... and are looking for a way to keep them consistent so any modification in one of them applies to the other environments as well
</b></details>
<details>
@ -82,6 +97,15 @@ Terraform is considered to be an IaC technology. It's used for provisioning reso
Ansible, Puppet and Chef are Configuration Management technologies. They are used once there is an instance running and you would like to apply some configuration on it, like installing an application, applying a security policy, etc.
To be clear, CM tools can be used to provision resources, so for the end goal of having infrastructure, both Terraform and something like Ansible can achieve the same result. The difference is in the how. Ansible doesn't save the state of resources; it doesn't know how many instances there are in your environment, as opposed to Terraform. At the same time, while Terraform can perform configuration management tasks, it has less module support for that specific goal and it doesn't track the task execution state as Ansible does. The differences are there, and most of the time it's recommended to mix the technologies, so Terraform is used for managing infrastructure and CM technologies are used for configuration on top of that infrastructure.
</b></details>
### Terraform Hands-On Basics
<details>
<summary>How to reference other parts of your Terraform code?</summary><br><b>
Using the syntax `<PROVIDER TYPE>.<NAME>.<ATTRIBUTE>` (for example, `aws_instance.example.id`)
</b></details>
### Providers
@ -534,8 +558,24 @@ It's does NOT create the definitions/configuration for creating such infrastruct
2. You lost your tfstate file and need to rebuild it
</b></details>
### AWS
<details>
<summary>What happens if you update user_data in the following case and apply the changes?
```
resource "aws_instance" "example" {
ami = "..."
instance_type = "t2.micro"
user_data = <<-EOF
#!/bin/bash
echo "Hello, World" > index.xhtml
EOF
```
</summary><br><b>
Nothing, because user_data is executed on boot so if an instance is already running, it won't change anything.
To make it effective you'll have to use `user_data_replace_on_change = true`.
</b></details>