Add a couple of questions

Mostly Kubernetes and containers.
This commit is contained in:
abregman 2021-10-10 14:46:45 +03:00
parent 4f2662f611
commit b245c2adf2
3 changed files with 477 additions and 92 deletions

535
README.md

@ -4651,6 +4651,12 @@ Desktop virtualization
Yes, it's operating-system-level virtualization, where the kernel is shared and allows the use of multiple isolated user-space instances.
</b></details>
<details>
<summary>How did the introduction of virtual machines change the industry and the way applications were deployed?</summary><br><b>

The introduction of virtual machines allowed companies to deploy multiple business applications on the same hardware, while keeping the applications separated from each other in a secure way, each running on its own separate operating system.
</b></details>
## Ansible

### Ansible Exercises
@ -5620,7 +5626,8 @@ resource "aws_instance" "tf_aws_instance" {
|Name|Topic|Objective & Instructions|Solution|Comments|
|--------|--------|------|----|----|
|My First Dockerfile|Dockerfile|[Exercise](exercises/write_dockerfile_run_container.md)|[Solution](exercises/write_dockerfile_run_container.md)
|Working with Images|Image|[Exercise](exercises/containers/working_with_images.md)|[Solution](exercises/containers/solutions/working_with_images.md)
### Containers Self Assessment
@ -5649,9 +5656,10 @@ The primary difference between containers and VMs is that containers allow you t
multiple workloads on a single operating system, while in the case of VMs, the hardware is being virtualized to run multiple machines each with its own guest OS.
You can also think about it as containers are for OS-level virtualization while VMs are for hardware virtualization.

* Containers don't require an entire guest operating system as VMs do. Containers share the system's kernel, as opposed to VMs. They isolate themselves via kernel features such as namespaces and cgroups
* It usually takes a few seconds to set up a container, as opposed to VMs which can take minutes or at least more time, as there is an entire OS to boot and initialize, while containers share the underlying OS
* Virtual machines are considered to be more secure than containers
* VM portability is considered to be limited when compared to containers
</b></details>

<details>
@ -5669,7 +5677,12 @@ You should choose containers when:
<details>
<summary>What is the OCI?</summary><br><b>

OCI (Open Container Initiative) is an open governance structure established in 2015 to standardize container creation - mostly image format and runtime. At that time there were a number of parties involved and the most prominent one was Docker.
Specifications published by OCI:
- [image-spec](https://github.com/opencontainers/image-spec)
- [runtime-spec](https://github.com/opencontainers/runtime-spec)
</b></details>

<details>
@ -5697,27 +5710,10 @@ Through the use of namespaces and cgroups. Linux kernel has several types of nam
<details>
<summary>Describe in detail what happens when you run `podman/docker run hello-world`?</summary><br><b>

The Docker/Podman CLI passes your request to the daemon.
The Docker/Podman daemon downloads the image from Docker Hub.
The Docker/Podman daemon creates a new container by using the image it downloaded.
The Docker/Podman daemon redirects output from the container to the CLI, which redirects it to the standard output.
</b></details>
<details>
<summary>What are `dockerd, docker-containerd, docker-runc, docker-containerd-ctr, docker-containerd-shim` ?</summary><br><b>
dockerd - The Docker daemon itself. The highest level component in your list and also the only 'Docker' product listed. Provides all the nice UX features of Docker.
(docker-)containerd - Also a daemon, listening on a Unix socket, exposes gRPC endpoints. Handles all the low-level container management tasks, storage, image distribution, network attachment, etc...
(docker-)containerd-ctr - A lightweight CLI to directly communicate with containerd. Think of it as how 'docker' is to 'dockerd'.
(docker-)runc - A lightweight binary for actually running containers. Deals with the low-level interfacing with Linux capabilities like cgroups, namespaces, etc...
(docker-)containerd-shim - After runC actually runs the container, it exits (allowing us to not have any long-running processes responsible for our containers). The shim is the component which sits between containerd and runc to facilitate this.
![alt text](https://i.stack.imgur.com/lAtSR.png "Docker Process")
</b></details>

<details>
@ -5742,6 +5738,43 @@ Multiple namespaces: pid,net, mnt, uts, ipc, user
</b></details>
#### Containers - Docker Architecture
<details>
<summary>Which components/layers compose the Docker technology?</summary><br><b>
1. Runtime - responsible for starting and stopping containers
2. Daemon/Engine - implements the runtime, API, ...
3. Orchestrator
</b></details>
<details>
<summary>What is the low-level runtime?</summary><br><b>
- The low-level runtime is called runc
- It manages every container running on a Docker host
- Its purpose is to interact with the underlying OS to start and stop containers
- It's the reference implementation of the OCI (Open Container Initiative) runtime specification
</b></details>
<details>
<summary>What is the high-level runtime?</summary><br><b>
- The high-level runtime is called containerd
- It manages the whole lifecycle of a container - network interfaces, pulling images, ...
- It manages the lower-level runtime (runc) instances
- It's used both by Docker and Kubernetes as a container runtime

Note: if you run `ps -ef | grep -i containerd` on a system with Docker installed and running, you should see a containerd process
</b></details>
<details>
<summary>True or False? The docker daemon (dockerd) performs lower-level tasks compared to containerd</summary><br><b>
False. The Docker daemon performs higher-level tasks compared to containerd.<br>
It's responsible for managing networks, volumes, images, ...
</b></details>
<details>
<summary>Describe in detail what happens when you run `docker pull image:tag`?</summary><br><b>

Docker CLI passes your request to the Docker daemon. Dockerd logs show the process:
@ -5755,7 +5788,6 @@ pulling blob \"sha256:61c5ed1cbdf8e801f3b73d906c61261ad916b2532d6756e7c4fbcacb97
Applying tar in /var/lib/docker/overlay2/507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7/diff" storage-driver=overlay2

Applied tar sha256:514c3a3e64d4ebf15f482c9e8909d130bcd53bcc452f0225b0a04744de7b8c43 to 507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7, size: 1223534
</b></details>

<details>
@ -5798,7 +5830,37 @@ Create a new image from a containers changes
3. docker rm $(docker ps -a -q) - This command will delete all stopped containers. The command docker ps -a -q will return all existing container IDs and pass them to the rm command which will delete them. Any running containers will not be deleted.
</b></details>

<details>
<summary>How does the Docker client communicate with the daemon?</summary><br><b>
Via the local socket at `/var/run/docker.sock`
</b></details>
#### Containers - Images
<details>
<summary>What is a container image?</summary><br><b>
An image of a container contains the application, its dependencies and the operating system where the application is executed.<br>
TODO: add more details
</b></details>
<details>
<summary>How to list the container images on a certain host?</summary><br><b>

`podman image ls`<br>
`docker image ls`

It depends on which container technology you use.
</b></details>
<details>
<summary>How to retrieve the latest ubuntu image?</summary><br><b>
`docker image pull ubuntu:latest`
</b></details>
#### Containers - Dockerfile
<details>
<summary>What is a Dockerfile?</summary><br><b>
@ -5938,15 +6000,21 @@ Read more [here](https://www.redhat.com/en/topics/containers/what-is-a-kubernete
<details>
<summary>Do you have experience with deploying a Kubernetes cluster? If so, can you describe the process in high-level?</summary><br><b>

1. Create multiple instances you will use as Kubernetes nodes/workers. Create also an instance to act as the Master. The instances can be provisioned in a cloud or they can be virtual machines on bare metal hosts.
2. Provision a certificate authority that will be used to generate TLS certificates for the different components of a Kubernetes cluster (kubelet, etcd, ...). Generate a certificate and private key for each of the different components.
3. Generate kubeconfigs so the different clients of Kubernetes can locate the API servers and authenticate.
4. Generate an encryption key that will be used for encrypting the cluster data
5. Create an etcd cluster
</b></details>
<details>
<summary>When or why NOT to use Kubernetes?</summary><br><b>

- If you are a big team of engineers (e.g. 200) deploying applications using containers and you need to manage scaling, rolling out updates, etc., you probably want to use Kubernetes
- If you manage low-level infrastructure or bare metal servers, Kubernetes is probably not what you need or want
- If you are a small team (e.g. 20-50 engineers), Kubernetes might be overkill (even if you need scaling, rolling out updates, etc.)
</b></details>
<details>
@ -5968,6 +6036,13 @@ Read more [here](https://www.redhat.com/en/topics/containers/what-is-a-kubernete
metadata, kind and apiVersion
</b></details>
<details>
<summary>What actions or operations do you consider best practices when it comes to Kubernetes?</summary><br><b>

- Always make sure Kubernetes YAML files are valid. Applying automated checks and pipelines is recommended.
- Always specify requests and limits to prevent a situation where containers use the entire cluster memory, which may lead to OOM issues
</b></details>
<details>
<summary>What is kubectl?</summary><br><b>
@ -6029,6 +6104,18 @@ False. A Kubernetes cluster consists of at least 1 master and can have 0 workers
<img src="images/kubernetes/kubernetes_components_solution.png"/>
</b></details>
<details>
<summary>You are managing multiple Kubernetes clusters. How do you quickly change between the clusters using kubectl?</summary><br><b>
`kubectl config use-context`
</b></details>
<details>
<summary>How do you prevent high memory usage in your Kubernetes cluster and possibly issues like memory leak and OOM?</summary><br><b>
Apply requests and limits, especially on third-party applications (where the uncertainty is even bigger)
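For example, a Pod spec could cap a container like this (the Pod name, image and values here are illustrative, not from the original answer):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: some-app            # hypothetical Pod name
spec:
  containers:
  - name: app
    image: some-image:1.0   # hypothetical image
    resources:
      requests:             # reserved by the scheduler when placing the Pod
        memory: "128Mi"
        cpu: "250m"
      limits:               # hard cap; exceeding the memory limit gets the container OOM-killed
        memory: "256Mi"
        cpu: "500m"
```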
</b></details>
#### Kubernetes - Pods

<details>
@ -6244,6 +6331,12 @@ You can read more about it in [kubernetes.io](https://kubernetes.io/docs/tasks/c
Only containers whose state is set to Success will be able to receive requests sent to the Service.
</b></details>
<details>
<summary>Why is it usually considered better to include one container per Pod?</summary><br><b>

One reason is that it makes it harder to scale when you need to scale only one of the containers in a given Pod.
</b></details>
#### Kubernetes - Deployments

<details>
@ -6315,7 +6408,7 @@ In simpler words, it allows you to add an internal or external connectivity to a
</b></details>

<details>
<summary>True or False? The lifecycle of Pods and Services isn't connected, so when a Pod dies, the Service still stays</summary><br><b>

True
</b></details>
@ -6340,7 +6433,9 @@ The default is ClusterIP and it's used for exposing a port internally. It's usef
<details>
<summary>How to get information on a certain service?</summary><br><b>

`kubectl describe service <SERVICE_NAME>`

It's more common to use `kubectl describe svc ...`
</b></details>
<details>
@ -6351,13 +6446,50 @@ kubectl expose rs some-replicaset --name=replicaset-svc --target-port=2017 --typ
```
</summary><br><b>

It exposes a ReplicaSet by creating a service called 'replicaset-svc'. The exposed port is 2017 (this is the port used by the application) and the service type is NodePort which means it will be reachable externally.
</b></details>
<details>
<summary>True or False? The target port, in the case of running the following command, will be exposed only on one of the Kubernetes cluster nodes but it will be routed to all the Pods
```
kubectl expose rs some-replicaset --name=replicaset-svc --target-port=2017 --type=NodePort
```
</summary><br><b>
False. It will be exposed on every node of the cluster and will be routed to one of the Pods (which belong to the ReplicaSet)
</b></details>

<details>
<summary>How to verify that a certain Service is configured to forward the requests to a given Pod?</summary><br><b>

Run `kubectl describe service` and see if the IPs from "Endpoints" match any IPs from the output of `kubectl get pod -o wide`
</b></details>
<details>
<summary>Explain what will happen when running `kubectl apply` on the following block
```
apiVersion: v1
kind: Service
metadata:
  name: some-app
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 2017
      protocol: TCP
  selector:
    type: backend
    service: some-app
```
</summary><br><b>
It creates a new Service of type "NodePort", which means it can be used for internal and external communication with the app.<br>
The port of the application is 8080 and the requests will be forwarded to this port. The exposed port is 2017. As a note, specifying the nodePort is not a common practice.<br>
The protocol used is TCP (instead of UDP), and since it's the default you don't have to specify it.<br>
The selector is used by the Service to know to which Pods to forward the requests. In this case, Pods with the labels "type: backend" and "service: some-app".<br>
</b></details>
<details>
@ -6395,6 +6527,62 @@ spec:
Ingress
</b></details>
<details>
<summary>True or False? When "NodePort" is used, "ClusterIP" will be created automatically?</summary><br><b>
True
</b></details>
<details>
<summary>When would you use the "LoadBalancer" type</summary><br><b>
Mostly when you would like to combine it with a cloud provider's load balancer
</b></details>
<details>
<summary>How would you map a service to an external address?</summary><br><b>
By using a Service of the 'ExternalName' type.
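A minimal sketch of such a Service (the name and external hostname below are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db              # hypothetical name
spec:
  type: ExternalName
  externalName: db.example.com   # cluster DNS returns a CNAME to this address
```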
</b></details>
<details>
<summary>Describe in detail what happens when you create a service</summary><br><b>
1. kubectl sends a request to the API server to create a Service
2. The controller detects there is a new Service
3. The controller creates Endpoint objects with the same name as the Service
4. The controller uses the Service selector to identify the endpoints
5. kube-proxy detects there are a new endpoint object and a new Service, and adds iptables rules to capture traffic to the Service port and redirect it to endpoints
6. kube-dns detects there is a new Service and adds the DNS record of the Service to the DNS server
</b></details>
<details>
<summary>How to list the endpoints of a certain app?</summary><br><b>
`kubectl get ep <name>`
</b></details>
<details>
<summary>How can you find out information on a Service related to a certain Pod if all you can use is <code>kubectl exec <POD_NAME> -- </code></summary><br><b>
You can run `kubectl exec <POD_NAME> -- env` which will give you a couple environment variables related to the Service.<br>
Variables such as `[SERVICE_NAME]_SERVICE_HOST`, `[SERVICE_NAME]_SERVICE_PORT`, ...
</b></details>
<details>
<summary>Describe what happens when a container tries to connect with its corresponding Service for the first time. Explain who added each of the components you include in your description</summary><br><b>
- The container looks at the nameserver defined in /etc/resolv.conf
- The container queries the nameserver so the address is resolved to the Service IP
- Requests sent to the Service IP are forwarded with iptables rules (or other chosen software) to the endpoint(s).
Explanation as to who added them:
- The nameserver in the container is added by kubelet during the scheduling of the Pod, by using kube-dns
- The DNS record of the service is added by kube-dns during the Service creation
- iptables rules are added by kube-proxy during Endpoint and Service creation
</b></details>
#### Kubernetes - Ingress

<details>
@ -6432,6 +6620,7 @@ spec:
```
</b></details>

<details>
<summary>Explain the meaning of "http", "host" and "backend" directives
@ -6456,6 +6645,12 @@ the http line used for specifying that incoming requests will be forwarded to th
backend is referencing the internal service (serviceName is the name under metadata and servicePort is the port under the ports section).
</b></details>
<details>
<summary>Why may using a wildcard in an Ingress host lead to issues?</summary><br><b>

The reason you should not use a wildcard value in a host (like `- host: *`) is that you basically tell your Kubernetes cluster to forward all the traffic to the container where you used this Ingress. This may cause the entire cluster to go down.
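One way to avoid this is to pin the rule to a specific host. A sketch (the hostname, Ingress name and service details are illustrative; the `networking.k8s.io/v1` API is assumed here):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-app-ingress      # hypothetical name
spec:
  rules:
  - host: app.example.com     # a specific host instead of a wildcard
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: some-app
            port:
              number: 8080
```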
</b></details>
<details>
<summary>What is an Ingress Controller?</summary><br><b>
@ -6653,6 +6848,22 @@ True. When the label, used by a ReplicaSet in the selector field, removed from a
kubectl scale deploy <DEPLOYMENT_NAME> --replicas=8
</b></details>
<details>
<summary>True or False? ReplicaSets are running the moment the user executed the command to create them (like <code>kubectl create -f rs.yaml</code>)</summary><br><b>

False. It can take some time, depending on what exactly you are running. To see if they are up and running, run `kubectl get rs` and watch the 'READY' column.
</b></details>
<details>
<summary>How to expose a ReplicaSet as a new service?</summary><br><b>
`kubectl expose rs <ReplicaSet Name> --name=<Service Name> --target-port=<Port to expose> --type=NodePort`
A few notes:
- the target port depends on which port the app is using in the container
- the type can be different and doesn't have to be specifically "NodePort"
</b></details>
#### Kubernetes - Storage

<details>
@ -6727,12 +6938,12 @@ YAML
</b></details>

<details>
<summary>Where does the Kubernetes cluster store the cluster state?</summary><br><b>

etcd
</b></details>
#### Kubernetes - etcd

<details>
<summary>What is etcd?</summary><br><b>
@ -6756,6 +6967,10 @@ True
True
</b></details>
<details>
<summary>Why etcd? Why not some SQL or NoSQL database?</summary><br><b>
</b></details>
#### Kubernetes - Namespaces

<details>
@ -7317,16 +7532,74 @@ The pod is automatically assigned with the default service account (in the names
#### Kubernetes - Patterns
<details>
<summary>Which containers pattern is used in the following drawing?</summary><br><b>
</b></details>
<details>
<summary>Explain the sidecar container pattern</summary><br><b>
</b></details>
#### Kubernetes - CronJob
<details>
<summary>Explain what is CronJob and what is it used for</summary><br><b>
</b></details>
<details>
<summary>What possible issue can arise from using the following spec and how to fix it?
```
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: some-cron-job
spec:
schedule: '*/1 * * * *'
startingDeadlineSeconds: 10
concurrencyPolicy: Allow
```
</summary><br><b>
If the cron job fails, the next job will not replace the previous one due to the "concurrencyPolicy" value which is "Allow". It will keep spawning new jobs and so eventually the system will be filled with failed cron jobs.

To avoid such a problem, the "concurrencyPolicy" value should be either "Replace" or "Forbid".
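One possible fix, keeping the rest of the spec as-is (only the concurrencyPolicy value changes):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-cron-job
spec:
  schedule: '*/1 * * * *'
  startingDeadlineSeconds: 10
  concurrencyPolicy: Forbid   # don't start a new run while the previous one is still active
```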
</b></details>
<details>
<summary>What issue might arise from using the following CronJob and how to fix it?
```
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: "some-cron-job"
spec:
schedule: '*/1 * * * *'
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
```
</summary><br><b>
The following lines are placed under the Job template instead of the CronJob spec:

```
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
```

As a result this configuration isn't part of the cron job spec, hence the cron job has no limits, which can cause issues like OOM and potentially lead to the API server being down.<br>
To fix it, these lines should be placed in the spec of the cron job, above or under the "schedule" directive in the above example.
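The corrected fragment, with the three lines moved up to the CronJob spec (container details are omitted, as in the original):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: "some-cron-job"
spec:
  schedule: '*/1 * * * *'
  concurrencyPolicy: Forbid        # now part of the CronJob spec
  successfulJobsHistoryLimit: 1    # old Jobs get cleaned up
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
```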
</b></details>
#### Kubernetes - Misc
<details>
<summary>Explain Imperative Management vs. Declarative Management</summary><br><b>
</b></details>
<details>
<summary>Explain what Kubernetes Service Discovery means</summary><br><b>
</b></details>
@ -7364,7 +7637,7 @@ Overall it's good for:
</b></details>

<details>
<summary>True or False? Sensitive data, like credentials, should be stored in a ConfigMap</summary><br><b>

False. Use a Secret.
</b></details>
@ -7414,11 +7687,6 @@ View more [here](https://www.youtube.com/watch?v=rDCWxkvPlAw)
<summary> How are labels and selectors used?</summary><br><b>
</b></details>
<details>
<summary>Explain what is CronJob and what is it used for</summary><br><b>
</b></details>
<details>
<summary>What QoS classes are there?</summary><br><b> <summary>What QoS classes are there?</summary><br><b>
@ -7439,7 +7707,36 @@ View more [here](https://www.youtube.com/watch?v=rDCWxkvPlAw)
<summary>What is Kubeconfig?</summary><br><b>
</b></details>
#### Kubernetes - Gatekeeper
<details>
<summary>What is Gatekeeper?</summary><br><b>
[Gatekeeper docs](https://open-policy-agent.github.io/gatekeeper/website/docs): "Gatekeeper is a validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent"
</b></details>
<details>
<summary>Explain how Gatekeeper works</summary><br><b>
On every request sent to the Kubernetes cluster, Gatekeeper sends the policies and the resources to OPA (Open Policy Agent) to check if it violates any policy. If it does, Gatekeeper will return the policy error message back. If it doesn't violate any policy, the request will reach the cluster.
</b></details>
#### Kubernetes - Policy Testing
<details>
<summary>What is Conftest?</summary><br><b>
Conftest allows you to write tests against structured files. You can think of it as a testing library for Kubernetes resources.<br>
It is mostly used in testing environments such as CI pipelines or local hooks.
</b></details>
<details>
<summary>What is Datree? How is it different from Conftest?</summary><br><b>
Same as Conftest, it is used for policy testing and enforcement. The difference is that it comes with built-in policies.
</b></details>
#### Kubernetes - Helm
<details>
<summary>What is Helm?</summary><br><b>
@ -9650,11 +9947,19 @@ Alert manager is responsible for alerts ;)
<details>
<summary>How do you know if a certain directory is a git repository?</summary><br><b>

You can check if there is a ".git" directory.
</b></details>
<details>
<summary>Explain the following: <code>git directory</code>, <code>working directory</code> and <code>staging area</code></summary><br><b>

This answer is taken from [git-scm.com](https://git-scm.com/book/en/v1/Getting-Started-Git-Basics#_the_three_states)

"The Git directory is where Git stores the metadata and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.

The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.

The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It's sometimes referred to as the index, but it's becoming standard to refer to it as the staging area."
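A quick way to see the three areas in action, as a sketch in a scratch repository (names and paths are made up):

```shell
# Create a scratch repo; the .git/ folder is the "Git directory"
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo"

# A brand new file exists only in the working directory
echo "hello" > app.txt
git status --short app.txt    # "?? app.txt" - untracked

# git add records a snapshot of the file in the staging area (the index)
git add app.txt
git status --short app.txt    # "A  app.txt" - staged for the next commit

# git commit stores the staged snapshot in the Git directory
git commit -qm "add app.txt"
git status --short app.txt    # no output - working dir, index and HEAD agree
```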
</b></details> </b></details>
<details>
@ -9670,15 +9975,44 @@ a separate branch in your local repository
</b></details> </b></details>
<details>
<summary>How to check if a file is tracked and if not, then track it?</summary><br><b>

There are different ways to check whether a file is tracked or not:

- `git ls-files --error-unmatch <file>` -> exit code of 0 means it's tracked
- `git blame <file>`
...
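A sketch of the first approach in a scratch repository (file names are made up; `--error-unmatch` is what makes the exit code meaningful):

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo"

echo data > tracked.txt
git add tracked.txt && git commit -qm "initial"
echo data > untracked.txt

# Exit code 0 => tracked; non-zero => untracked
if git ls-files --error-unmatch tracked.txt >/dev/null 2>&1; then
  echo "tracked.txt is tracked"
fi

# Track the file only if it isn't tracked yet
if ! git ls-files --error-unmatch untracked.txt >/dev/null 2>&1; then
  git add untracked.txt
fi
git ls-files --error-unmatch untracked.txt >/dev/null 2>&1 && echo "untracked.txt is now tracked"
```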
</b></details>
<details>
<summary>How can you see which changes you have made before committing them?</summary><br><b>

`git diff`
</b></details>
<details>
<summary>What <code>git status</code> does?</summary><br><b>
</b></details>
<details>
<summary>You have two branches - main and devel. How do you make sure devel is in sync with main?</summary><br><b>
```
git checkout main
git pull
git checkout devel
git merge main
```
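A local-only sketch of the flow above (no remote here, so the `git pull` step is omitted; repo and file names are made up):

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo"
git checkout -q -b main

echo v1 > file.txt
git add file.txt && git commit -qm "initial"
git branch devel                  # devel starts at the same commit as main

echo v2 > file.txt
git commit -qam "update on main"  # main moves ahead of devel

# Sync devel with main
git checkout -q devel
git merge -q main                 # fast-forwards devel to main's tip
git diff --quiet main && echo "devel is in sync with main"
```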
</b></details>
#### Git - Merge
<details>
<summary>You have two branches - main and devel. How do you put devel into main?</summary><br><b>
```
git checkout main
git merge devel
git push origin main
```
</b></details>
<details>
@ -9693,28 +10027,6 @@ Finally, you run `git rebase --continue`
</p>
</b></details>
<details>
<summary>What merge strategies are you familiar with?</summary><br><b>

This page explains it the best: https://git-scm.com/docs/merge-strategies
</b></details>
<details>
<summary>Explain Git octopus merge</summary><br><b>

Probably good to mention that:

* It's good for merging more than one branch (and it's also the default strategy in such cases)
* It's primarily meant for bundling topic branches together

This is a great article about Octopus merge: http://www.freblogg.com/2016/12/git-octopus-merge.html
</b></details>
<details>
<summary>What is the difference between <code>git reset</code> and <code>git revert</code>?</summary><br><b>
<p>
`git revert` creates a new commit which undoes the changes of the last commit.
`git reset`, depending on usage, can modify the index or change which commit the branch head
is currently pointing at.
</p>
</b></details>
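The difference is easy to see in a throwaway repository (file name and messages are placeholders):

```shell
# revert adds a new commit; reset moves the branch head
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo

echo one  > file.txt && git add file.txt && git commit -qm "first"
echo two >> file.txt && git commit -qam "second"

# revert: history grows, the change from "second" is undone by a new commit
git revert --no-edit HEAD
git log --oneline | wc -l    # 3 commits now

# reset: the branch head moves; here we drop the revert commit again
git reset --hard HEAD~1
git log --oneline | wc -l    # back to 2 commits
```

This is why `git revert` is the safe choice on shared branches: it never rewrites history that others may have already pulled.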
#### Git - Rebase
<details>
<summary>You would like to move the fourth commit to the top. How would you achieve that?</summary><br><b>

Using the `git rebase` command, in interactive mode, where you can reorder the commits
</b></details>
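In day-to-day use you would run `git rebase -i HEAD~4` and move the commit's line in the editor that opens. As a sketch, the same reordering can be scripted by rewriting the rebase todo list (repository, file names and messages here are placeholders):

```shell
# Reorder commits non-interactively: the oldest of four commits ends up on top
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo

for n in 1 2 3 4; do
  echo "$n" > "f$n.txt"
  git add "f$n.txt" && git commit -qm "commit $n"
done

# GIT_SEQUENCE_EDITOR edits the todo file: move the first pick line to the end
GIT_SEQUENCE_EDITOR='f() { grep "^pick" "$1" > "$1.picks"; { tail -n +2 "$1.picks"; head -n 1 "$1.picks"; } > "$1"; }; f' \
  git rebase -i --root
git log -1 --pretty=%s   # prints "commit 1"
```

Reordering only works cleanly when the moved commits don't conflict with each other; otherwise the rebase stops and asks you to resolve.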
<details>
<summary>In what situations are you using <code>git rebase</code>?</summary><br><b>

A common case is keeping a feature branch up to date with the main branch while preserving a linear history: the feature branch's commits are replayed on top of main instead of creating a merge commit.
</b></details>
<details>

If you would like to also discard the changes, run `git reset --hard`

False. If you would like to keep a file on your filesystem, use `git reset <file_name>`
</b></details>
## Go

<details>

HTTP Header Injection vulnerabilities occur when user input is insecurely included within server responses.

A buffer overflow (or buffer overrun) occurs when the volume of data exceeds the storage capacity of the memory buffer. As a result, the program attempting to write the data to the buffer overwrites adjacent memory locations.
</b></details>
#### Security - Containers

<details>
<summary>What security measures are you taking when dealing with containers?</summary><br><b>
It's resolved in this order:

* PTR
* MX
* AAAA
...

A more detailed list can be found [here](https://www.nslookup.io/learning/dns-record-types)
</b></details>
<details>

It's an architecture in which data is stored in and retrieved from a single, non-shared, source.

7. The browser then sends subsequent requests as needed to the server to get the embedded links, javascript, images in the HTML and then steps 3 to 5 are repeated.
</b></details>
#### API

<details>
<summary>Explain what is an API</summary><br><b>

I like this definition from [blog.christianposta.com](https://blog.christianposta.com/microservices/api-gateways-are-going-through-an-identity-crisis):

"An explicitly and purposefully defined interface designed to be invoked over a network that enables software developers to get programmatic access to data and functionality within an organization in a controlled and comfortable way."
</b></details>
<details>
<summary>What is an API specification?</summary><br><b>
From [swagger.io](https://swagger.io/resources/articles/difference-between-api-documentation-specification):
"An API specification provides a broad understanding of how an API behaves and how the API links with other APIs. It explains how the API functions and the results to expect when using the API"
</b></details>
<details>
<summary>True or False? API Definition is the same as API Specification</summary><br><b>
False. From [swagger.io](https://swagger.io/resources/articles/difference-between-api-documentation-specification):
"An API definition is similar to an API specification in that it provides an understanding of how an API is organized and how the API functions. But the API definition is aimed at machine consumption instead of human consumption of APIs."
</b></details>
<details>
<summary>What is Automation? How is it related to, or different from, Orchestration?</summary><br><b>

## Working with Images - Solution
### Objective
Learn how to work with container images
### Requirements
Make sure Podman, Docker or any other container engine is installed on your system
### Instructions
1. List the containers images in your environment - `podman image ls`
2. Pull the latest ubuntu image - `podman image pull ubuntu:latest`
3. Run a container with the image you just pulled - `podman container run -it ubuntu:latest /bin/bash`
4. Remove the image. Did it work? - No. There is a running container that is still using the image we are trying to remove
5. Do whatever is needed in order to remove the image - `podman rm <container_id>; podman image rm ubuntu`

## Working with Images
### Objective
Learn how to work with container images
### Requirements
Make sure Podman, Docker or any other container engine is installed on your system
### Instructions
1. List the containers images in your environment
2. Pull the latest ubuntu image
3. Run a container with the image you just pulled
4. Remove the image. Did it work?
5. Do whatever is needed in order to remove the image