Add questions on ArgoCD

As well as on other topics.
abregman 2022-09-14 21:39:47 +03:00
parent 3d129216e0
commit e56bf576df
7 changed files with 243 additions and 46 deletions


@@ -78,6 +78,7 @@
</tr>
<tr>
<td align="center"><a href="topics/kafka/README.md"><img src="images/logos/kafka.png" width="70px;" height="80px;" alt="Kafka"/><br /><b>Kafka</b></a></td>
<td align="center"><a href="topics/argo/README.md"><img src="images/logos/argo.png" width="80px;" height="80px;" alt="Argo"/><br /><b>Argo</b></a></td>
</tr>
</table>
</center>
@@ -5548,6 +5549,23 @@ A configuration->deployment which has some advantages like:
2. More immutable infrastructure - with configuration->deployment it's not likely to have very different deployments since most of the configuration is done prior to the deployment. Issues like dependency errors are handled/discovered prior to deployment in this model.
</b></details>
## Release
<details>
<summary>Explain Semantic Versioning</summary><br><b>
[This](https://semver.org/) page explains it perfectly:
```
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes
MINOR version when you add functionality in a backwards compatible manner
PATCH version when you make backwards compatible bug fixes
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
```
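For example, starting from version `1.4.2`: a backwards compatible bug fix bumps it to `1.4.3`, new backwards compatible functionality bumps it to `1.5.0` (resetting the patch number), and an incompatible API change bumps it to `2.0.0`.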
</b></details>
## Certificates
If you are looking for a way to prepare for a certain exam this is the section for you. Here you'll find a list of certificates, each referencing a separate file with focused questions that will help you prepare for the exam. Good luck :)

BIN
images/logos/argo.png Normal file

Binary file not shown (86 KiB).

topics/argo/README.md Normal file

@@ -0,0 +1,115 @@
# Argo
## ArgoCD Exercises
TODO
## Argo Questions
### ArgoCD 101
<details>
<summary>What is Argo CD?</summary><br><b>
[ArgoCD](https://argo-cd.readthedocs.io/en/stable): "Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes."
As to why Argo CD, they provide the following explanation: "Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand."
</b></details>
<details>
<summary>There have been a lot of CI/CD systems before ArgoCD (Jenkins, TeamCity, CircleCI, etc.). What added value does ArgoCD bring?</summary><br><b>
Simply put, ArgoCD runs on Kubernetes and is part of its ecosystem, as opposed to some other CI/CD systems.
It's easier to explain the need for ArgoCD by a direct comparison to another CI/CD system. Let's use Jenkins for this.
With Jenkins, you need to make sure to install k8s related tools and set up access for commands like kubectl.
With ArgoCD, you simply need to install it in your namespace, but there is no need to install additional tools as it runs as part of the cluster.
With Jenkins, managing access is usually done per pipeline, and even if set globally in Jenkins, you still need to configure each pipeline to use that access configuration.
With ArgoCD, access management to k8s and other resources is given, as it already runs on the cluster, in one or multiple namespaces.
With Jenkins, tracking the status of what got deployed to k8s can be done only as an extra step, by running the pipeline. This is because Jenkins isn't part of the k8s cluster.
With ArgoCD, you get much better tracking and visibility of what gets deployed, as it runs in the same cluster and the same namespace.
With ArgoCD, it's really easy to roll back to a previous version because all the changes are done in Git, which is versioned source control. So it's enough to revert to a previous commit for ArgoCD to detect a change and sync it to the cluster. Worth mentioning, this point specifically is true for Jenkins as well :)
</b></details>
<details>
<summary>Describe an example of a workflow where ArgoCD is used</summary><br><b>
1. A developer submits a change to an application repository
2. A Jenkins pipeline is triggered to run CI on the change
3. If the Jenkins pipeline completes successfully, an image is built out of the new code
4. The image is pushed to a registry
5. The K8s manifest file(s) are updated in a separate app config repository
6. ArgoCD tracks changes in the app config repository. Since there was a change in the repository, it will apply the changes from the repo to the cluster (see the manifest sketch below)
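To make step 6 concrete, here is a minimal sketch of an ArgoCD `Application` manifest (the repository URL, path and names are hypothetical) that points ArgoCD at the app config repository and enables automated sync:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-config.git  # the app config (GitOps) repository
    targetRevision: HEAD
    path: k8s                 # directory in the repo holding the manifests
  destination:
    server: https://kubernetes.default.svc              # the cluster ArgoCD itself runs in
    namespace: my-app
  syncPolicy:
    automated:
      prune: true             # remove resources that were deleted from Git
      selfHeal: true          # revert manual changes so the cluster matches Git
```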
</b></details>
<details>
<summary>True or False? ArgoCD supports Kubernetes YAML files but not other manifest formats like Helm Charts and Kustomize</summary><br><b>
False. It supports Kubernetes YAML files as well as Helm Charts and Kustomize.
</b></details>
<details>
<summary>What "GitOps Repository" means in regards to ArgoCD?</summary><br><b>
It's the repository that holds app configuration, the one updated most of the time by CI/CD processes or DevOps, SRE engineers. In regards to ArgoCD it's the repository ArgoCD tracks for changes and apply them when they are detected.
</b></details>
<details>
<summary>What are the advantages of using a GitOps approach/repository?</summary><br><b>
* Your whole configuration is in one place, defined as code, so it's completely transparent, adjustable for changes and easily reproducible
* Everyone goes through the same interface, hence more people experience and test the code, even if not intentionally
* Engineers can use it for testing and development; no more running manual commands and hoping to reach the same state as in the cluster/cloud
* Single source of truth: you know that your GitOps repo is the one from which changes are made to the cluster. So even if someone tries to manually override it, it won't work.
</b></details>
<details>
<summary>Sorina, one of the engineers in your team, made manual changes to the cluster that override some of the configuration in a repo tracked by ArgoCD. What will happen?</summary><br><b>
Once Sorina makes the modifications, ArgoCD will detect that the state diverged and (assuming automated sync with self-healing is enabled) will sync the changes from the GitOps repository, overwriting the manual changes done by Sorina.
</b></details>
<details>
<summary>Nate, one of the engineers in your organization, asked whether it's possible to prevent ArgoCD from syncing when changes are done manually to the cluster. What would be your answer?</summary><br><b>
The answer is yes, it's possible. You can configure ArgoCD not to sync to the desired state when changes are done manually, and instead do something like sending alerts.
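For example, with the `syncPolicy` of an `Application` manifest (a sketch; the field names follow the ArgoCD Application spec shown earlier), you can keep automated sync for Git changes while disabling self-healing, so manual cluster changes are not reverted automatically:

```
syncPolicy:
  automated:
    prune: true
    selfHeal: false   # don't automatically revert manual changes to the cluster
```

Detecting and alerting on the resulting drift can then be handled separately, e.g. with something like Argo CD Notifications.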
</b></details>
<details>
<summary>How does cluster disaster recovery become easier with ArgoCD?</summary><br><b>
Imagine you have a cluster in the cloud, in one of the regions. Something happens to that cluster and it either crashes or is simply no longer operational.
If you have all your cluster configuration in a GitOps repository, ArgoCD can be pointed to that repository while being configured to use a new cluster you've set up, and apply that configuration so your cluster is up and running again with the same state as before.
</b></details>
### Access Control


@@ -1,32 +1,30 @@
# Containers
- [Containers](#containers)
  - [Exercises](#exercises)
    - [Running Containers](#running-containers)
    - [Images](#images)
    - [Misc](#misc)
  - [Questions](#questions)
    - [Containers 101](#containers-101)
    - [Commands Commands](#commands-commands)
    - [Images](#images-1)
      - [Registry](#registry)
      - [Tags](#tags)
      - [Containerfile](#containerfile)
    - [Storage](#storage)
    - [Architecture](#architecture)
      - [Docker Architecture](#docker-architecture)
    - [Docker Compose](#docker-compose)
    - [Networking](#networking)
      - [Docker Networking](#docker-networking)
    - [Security](#security)
    - [Docker in Production](#docker-in-production)
    - [OCI](#oci)
    - [Scenarios](#scenarios)
## Exercises
<a name="exercises-running-containers"></a>
### Running Containers
@@ -514,8 +512,7 @@ True. For mounted files you can use `podman inspect CONTAINER_NAME/ID`
Registry
</b></details>
#### Registry
<details>
<summary>What is a Registry?</summary><br><b>
@@ -582,8 +579,7 @@ You can specify a specific registry: `podman push IMAGE REGISTRY_ADDRESS`
2. Using `podman commit` on a running container after making changes to it
</b></details>
#### Tags
<details>
<summary>What are image tags? Why is it recommended to use tags when supporting multiple releases/versions of a project?</summary><br><b>
@@ -613,7 +609,7 @@ False. You can run `podman rmi IMAGE:TAG`.
True.
</b></details>
#### Containerfile
<details>
<summary>What is a Containerfile/Dockerfile?</summary><br><b>
@@ -660,8 +656,6 @@ It specifies the base layer of the image to be used. Every other instruction is
COPY takes in a source and destination. It lets you copy in a file or directory from the build context into the Docker image itself.<br>
ADD lets you do the same, but it also supports two other sources. You can use a URL instead of a file or directory from the build context. In addition, you can extract a tar file from the source directly into the destination.
</b></details>
<details>


@@ -433,6 +433,7 @@ Read more [here](https://about.gitlab.com/topics/gitops)
* It introduces limited/granular access to infrastructure
* It makes it easier to trace who makes changes to infrastructure
</b></details>
<details>
@@ -442,6 +443,14 @@ Read more [here](https://about.gitlab.com/topics/gitops)
* Apply review/approval process for changes
</b></details>
<details>
<summary>Two engineers in your team argue on where to put the configuration and infra related files of a certain application. One of them suggests to put them in the same repo as the application and the other one suggests to put them in their own separate repository. What's your take on that?</summary><br><b>
One might say we need more details, as to what these configuration and infra files look like exactly and how complex the application and its CI/CD pipeline(s) are. But in general, most of the time you will want to put configuration and infra related files in their own separate repository, and not in the repository of the application, for multiple reasons:
* Every change submitted to the configuration shouldn't trigger the CI/CD of the application; it should only test and apply the modified configuration
</b></details>
#### SRE
<details>


@@ -14,6 +14,7 @@
- [Services](#services)
- [Ingress](#ingress)
- [ReplicaSets](#replicasets)
- [StatefulSet](#statefulset)
- [Storage](#storage)
- [Network Policies](#network-policies)
- [Configuration File](#configuration-file)
@@ -32,6 +33,7 @@
- [Security](#security)
- [Troubleshooting Scenarios](#troubleshooting-scenarios)
- [Istio](#istio)
- [Controllers](#controllers)
- [Scenarios](#scenarios)
## Kubernetes Exercises
@@ -558,6 +560,10 @@ The following occurs when you run `kubectl create deployment some_deployment --i
Using a Service.
</b></details>
<details>
<summary>Can you use a Deployment for stateful applications?</summary><br><b>
You can, but a Deployment treats its Pods as identical, interchangeable replicas, while stateful applications usually need a stable identity and persistent storage per replica. That's what a StatefulSet provides, so it's usually the better fit.
</b></details>
### Services
<details>
@@ -1089,6 +1095,14 @@ A ReplicaSet's purpose is to maintain a stable set of replica Pods running at an
A DaemonSet ensures that all Nodes run a copy of a Pod.
</b></details>
### StatefulSet
<details>
<summary>Explain StatefulSet</summary><br><b>
StatefulSet is the workload API object used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods. [Learn more](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
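A minimal sketch of a StatefulSet manifest (name, image and storage size are arbitrary): the replicas get stable names (`web-0`, `web-1`, `web-2`) and each gets its own PersistentVolumeClaim:

```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web            # headless Service that gives each Pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.23
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:       # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```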
</b></details>
### Storage
<details>
@@ -1475,12 +1489,6 @@ They become candidates for termination.
False. CPU is a compressible resource while memory is a non-compressible resource - once a container reaches the memory limit, it will be terminated.
</b></details>
### Operators
<details>
@@ -1489,6 +1497,8 @@ Explained [here](https://www.youtube.com/watch?v=i9V4oCa5f9I)
Explained [here](https://kubernetes.io/docs/concepts/extend-kubernetes/operator)
"Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop."
In simpler words, you can think of an operator as a custom control loop in Kubernetes.
</b></details>
<details>
@@ -1502,17 +1512,27 @@ This also helps with automating a standard process on multiple Kubernetes clusters
<details>
<summary>What components does an Operator consist of?</summary><br><b>
1. CRD (Custom Resource Definition) - You are familiar with Kubernetes resources like Deployment, Pod, Service, etc. A CRD is also a resource, but one that you or the developer of the operator defines.
2. Controller - Custom control loop which runs against the CRD
</b></details>
<details>
<summary>Explain CRD</summary><br><b>
CRD stands for Custom Resource Definition. It's a custom Kubernetes component which extends the K8s API.
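As an illustration, a minimal sketch of a CRD manifest, based on the `CronTab` example from the Kubernetes docs, which adds a new `CronTab` resource type to the cluster's API:

```
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:        # schema validating the custom resource's fields
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              replicas:
                type: integer
```

Once applied, `kubectl get crontabs` works like any built-in resource.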
TODO(abregman): add more info.
</b></details>
<details>
<summary>How does an Operator work?</summary><br><b>
It uses the control loop used by Kubernetes in general. It watches for changes in the application state. The difference is that it uses a custom control loop.
In addition, it also makes use of CRDs (Custom Resource Definitions), so basically it extends the Kubernetes API.
</b></details>
<details>
@@ -1521,10 +1541,16 @@ In addition, it also makes use of CRD's (Custom Resources Definitions) so basica
True
</b></details>
<details>
<summary>Explain what is the OLM (Operator Lifecycle Manager) and what is it used for</summary><br><b>
</b></details>
<details>
<summary>What is the Operator Framework?</summary><br><b>
An open source toolkit used to manage k8s native applications, called operators, in an automated and efficient way.
</b></details>
<details>
@@ -1533,12 +1559,14 @@ open source toolkit used to manage k8s native applications, called operators, in
1. Operator SDK - allows developers to build operators
2. Operator Lifecycle Manager - helps to install, update and generally manage the lifecycle of all operators
3. Operator Metering - Enables usage reporting for operators that provide specialized services
</b></details>
<details>
<summary>Describe in detail what is the Operator Lifecycle Manager</summary><br><b>
It's part of the Operator Framework, used for managing the lifecycle of operators. It basically extends Kubernetes so a user can use a declarative way to manage operators (installation, upgrade, ...).
</b></details>
<details>
@@ -1548,6 +1576,7 @@ It includes:
* catalog-operator - Resolving and installing ClusterServiceVersions and the resources they specify.
* olm-operator - Deploys applications defined by ClusterServiceVersion resources
</b></details>
<details>
@@ -1558,12 +1587,20 @@ Use kubeconfig files to organize information about clusters, users, namespaces, and
</b></details>
<details>
<summary>Would you use Helm, Go or something else for creating an Operator?</summary><br><b>
Depends on the scope and maturity of the Operator. If it mainly covers installation and upgrades, Helm might be enough. If you want to go for lifecycle management, insights and auto-pilot, this is where you'd probably use Go.
</b></details>
<details>
<summary>Are there any tools, projects you are using for building Operators?</summary><br><b>
This one is based more on personal experience and taste...
* Operator Framework
* Kubebuilder
* Controller Runtime
...
</b></details>
### Secrets ### Secrets
@@ -1994,7 +2031,7 @@ Same as Conftest, it is used for policy testing and enforcement. The difference
<details>
<summary>What is Helm?</summary><br><b>
Package manager for Kubernetes. Basically the ability to package YAML files and distribute them to other users and apply them in the cluster(s).
</b></details>
<details>
@@ -2003,6 +2040,8 @@ Package manager for Kubernetes. Basically the ability to package YAML files and
Sometimes when you would like to deploy a certain application to your cluster, you need to create multiple YAML files/components like Secret, Service, ConfigMap, etc. This can be a tedious task. So it would make sense to ease the process by introducing something that will allow us to share these bundles of YAMLs every time we would like to add an application to our cluster. This something is called Helm.
A common scenario is having multiple Kubernetes clusters (prod, dev, staging). Instead of individually applying different YAMLs in each cluster, it makes more sense to create one Chart and install it in every cluster.
Another scenario is when you would like to share what you've created with the community, so that people and companies can easily deploy your application in their cluster.
</b></details>
<details>
@@ -2120,6 +2159,22 @@ TODO: finish this...
Istio is an open source service mesh that helps organizations run distributed, microservices-based apps anywhere. Istio enables organizations to secure, connect, and monitor microservices, so they can modernize their enterprise apps more swiftly and securely.
</b></details>
### Controllers
<details>
<summary>What is the control loop? How does it work?</summary><br><b>
Explained [here](https://www.youtube.com/watch?v=i9V4oCa5f9I)
</b></details>
<details>
<summary>What are all the phases/steps of a control loop?</summary><br><b>
- Observe - identify the cluster's current state
- Diff - identify whether a diff exists between the current state and the desired state
- Act - bring the current cluster state to the desired state (basically reach a state where there is no diff)
</b></details>
### Scenarios
<details>


@@ -276,15 +276,21 @@ resource "aws_instance" "tf_aws_instance" {
<details>
<summary>How do you test a terraform module?</summary><br><b>
There are multiple answers, but the most common one would likely be using the tool <code>terratest</code>, to test that a module can be initialized, can create resources, and can destroy those resources cleanly.
</b></details>
<details>
<summary>Where can you obtain Terraform modules?</summary><br><b>
Terraform modules can be found at the [Terraform registry](https://registry.terraform.io/browse/modules)
</b></details>
<details>
<summary>There's a discussion in your team whether to store modules in one centralized location/repository or have them in each of the projects/repositories where they are used. What's your take on that?</summary><br><b>
You might have a different opinion, but my personal take is to keep modules in one centralized repository, as any maintenance or updates you need to perform on a module are done in one place, instead of multiple times in different repositories.
</b></details>
### Variables
<details>