diff --git a/README.md b/README.md index 43acaea..fa1174e 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ :information_source:  This repo contains questions and exercises on various technical topics, sometimes related to DevOps and SRE -:bar_chart:  There are currently **2386** exercises and questions +:bar_chart:  There are currently **2393** exercises and questions :books:  To learn more about DevOps and SRE, check the resources in [devops-resources](https://github.com/bregman-arie/devops-resources) repository diff --git a/topics/aws/README.md b/topics/aws/README.md index 40d4268..c99993c 100644 --- a/topics/aws/README.md +++ b/topics/aws/README.md @@ -146,6 +146,12 @@ Failover | Route 53 | [Exercise](exercises/route_53_failover/exercise.md) | [Sol |--------|--------|------|----|----| | Simple Elastic Beanstalk Node.js app | Elastic Beanstalk | [Exercise](exercises/elastic_beanstalk_simple/exercise.md) | [Solution](exercises/elastic_beanstalk_simple/solution.md) | | +### CodePipeline + +|Name|Topic|Objective & Instructions|Solution|Comments| +|--------|--------|------|----|----| +| Basic CI with S3 | CodePipeline & S3 | [Exercise](exercises/basic_s3_ci/exercise.md) | [Solution](exercises/basic_s3_ci/solution.md) | | + ### Misc |Name|Topic|Objective & Instructions|Solution|Comments| @@ -1626,6 +1632,12 @@ Lifecycle hooks allows you perform extra steps before the instance goes in servi Lifecycle hooks in pending state. +
+Describe one way to test that an ASG actually works
+
+On Linux instances, you can install the 'stress' package and run stress to load the system for a certain period of time and see if the ASG kicks in by adding additional capacity (= more instances).
+
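+A minimal sketch, assuming an Ubuntu-based instance (adjust the package manager for your distribution):
+
+```
+sudo apt-get update && sudo apt-get install -y stress
+# keep 4 CPU workers busy for 5 minutes; a CPU-based scaling policy should then add capacity
+stress --cpu 4 --timeout 300s
+```
+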
+ ### Security
diff --git a/topics/aws/exercises/basic_s3_ci/exercise.md b/topics/aws/exercises/basic_s3_ci/exercise.md
new file mode 100644
index 0000000..c88f710
--- /dev/null
+++ b/topics/aws/exercises/basic_s3_ci/exercise.md
@@ -0,0 +1,9 @@
+# Basic CI with S3
+
+## Objectives
+
+1. Create a new S3 bucket
+2. Add an index.html file to the bucket and make the bucket a static website
+3. Create a GitHub repo and put the index.html there
+4. Make sure to connect your AWS account to GitHub
+5. Create a CI pipeline in AWS to publish the updated index.html from GitHub every time someone makes a change to a specific branch of the repo
diff --git a/topics/aws/exercises/basic_s3_ci/solution.md b/topics/aws/exercises/basic_s3_ci/solution.md
new file mode 100644
index 0000000..6cd3188
--- /dev/null
+++ b/topics/aws/exercises/basic_s3_ci/solution.md
@@ -0,0 +1,54 @@
+# Basic CI with S3
+
+## Objectives
+
+1. Create a new S3 bucket
+2. Add an index.html file to the bucket and make the bucket a static website
+3. Create a GitHub repo and put the index.html there
+4. Make sure to connect your AWS account to GitHub
+5. Create a CI pipeline in AWS to publish the updated index.html from GitHub every time someone makes a change to a specific branch of the repo
+
+## Solution
+
+### Manual
+
+#### Create S3 bucket
+
+1. Go to S3 service in AWS console
+2. Insert bucket name and choose region
+3. Uncheck "block public access" to make it public
+4. Click on "Create bucket"
+
+#### Static website hosting
+
+1. Navigate to the newly created bucket and click on "properties" tab
+2. Click on "Edit" in "Static Website Hosting" section
+3. Check "Enable" for "Static website hosting"
+4. Set "index.html" as index document and "error.html" as error document.
+
+#### S3 bucket permissions
+
+1. Click on "Permissions" tab in the newly created S3 bucket
+2. Click on Bucket Policy -> Edit -> Policy Generator. Click on "Generate Policy" for "GetObject"
+3. Copy the generated policy, go back to the "Permissions" tab and replace the current policy with it
+
+#### GitHub Source
+
+1. Go to the Developer Tools console and create a new connection (GitHub)
+
+#### Create a CI pipeline
+
+1. Go to CodePipeline in AWS console
+2. Click on "Create Pipeline" -> Insert a pipeline name -> Click on Next
+3. Choose the newly created source (GitHub) under sources
+4. Select repository name and branch name
+5. Select "AWS CodeBuild" as build provider
+6. Select "Managed Image", "standard" runtime and "new service role"
+7. In deploy stage choose the newly created S3 bucket and for deploy provider choose "Amazon S3"
+8. Review the pipeline and click on "Create pipeline"
+
+#### Test the pipeline
+
+1. Clone the project from GitHub
+2. Make changes to index.html and commit them (git commit -a)
+3. 
Push the new change, verify that the newly created AWS pipeline was triggered and check the content of the site diff --git a/topics/kubernetes/README.md b/topics/kubernetes/README.md index f349771..995940a 100644 --- a/topics/kubernetes/README.md +++ b/topics/kubernetes/README.md @@ -1,8 +1,6 @@ -## Kubernetes +# Kubernetes -### Kubernetes Exercises - -#### Developer & "Regular" User Path +## Kubernetes Exercises |Name|Topic|Objective & Instructions|Solution|Comments| |--------|--------|------|----|----| @@ -13,7 +11,7 @@ | Operating ReplicaSets | ReplicaSet | [Exercise](replicaset_02.md) | [Solution](solutions/replicaset_02_solution.md) | ReplicaSets Selectors | ReplicaSet | [Exercise](replicaset_03.md) | [Solution](solutions/replicaset_03_solution.md) -### Kubernetes Self Assessment +## Kubernetes Questions * [Kubernetes 101](#kubernetes-101) * [Kubernetes Hands-On Basics](#kubernetes-hands-on-basiscs) @@ -22,28 +20,17 @@ * [Kubernetes Deployments](#kubernetes-deployments) * [Kubernetes Services](#kubernetes-services) - -#### Kubernetes 101 +## Kubernetes 101
What is Kubernetes? Why organizations are using it?
-Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. +Kubernetes is an open-source system that provides users with the ability to manage, scale and deploy containerized applications. To understand what Kubernetes is good for, let's look at some examples: - -#### Kubernetes 101 - -
-What is Kubernetes? Why organizations are using it?
- -Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. - -To understand what Kubernetes is good for, let's look at some examples: - -* You would like to run a certain application in a container on multiple different locations. Sure, if it's 2-3 servers/locations, you can do it by yourself but it can be challenging to scale it up to additional multiple location.
-* Performing updates and changes across hundreds of containers
+* You would like to run a certain application in a container on multiple different locations and sync changes across all of them, no matter where they run +* Performing updates and changes across hundreds of containers * Handle cases where the current load requires to scale up (or down)
@@ -51,7 +38,7 @@ To understand what Kubernetes is good for, let's look at some examples: When or why NOT to use Kubernetes?
   - If you manage low level infrastructure or baremetals, Kubernetes is probably not what you need or want
-  - If you are a small team (like less than 20 engineers) running less than a dozen of containers, Kubernetes might be an overkill (even if you need scale, rolling out updates, etc.). You might still enjoy the benefits of using managed Kubernetes, but you definitely want to think about it carefully before making a decision
+  - If you are a small team (e.g. fewer than 20 engineers) running fewer than a dozen containers, Kubernetes might be overkill (even if you need scaling, rolling out updates, etc.). You might still enjoy the benefits of using managed Kubernetes, but you definitely want to think about it carefully before making a decision on whether to adopt it.
@@ -87,13 +74,6 @@ To understand what Kubernetes is good for, let's look at some examples: metadata, kind and apiVersion
-
-What actions or operations you consider as best practices when it comes to Kubernetes?
- - - Always make sure Kubernetes YAML files are valid. Applying automated checks and pipelines is recommended. - - Always specify requests and limits to prevent situation where containers are using the entire cluster memory which may lead to OOM issue -
-
What is kubectl?
@@ -108,6 +88,19 @@ Kubectl is the Kubernetes command line tool that allows you to run commands agai * Ingress: route traffic from outside the cluster
+
+Why is there no such command in Kubernetes as kubectl get containers?
+
+Because a container is not a Kubernetes object. The smallest object unit in Kubernetes is a Pod. In a single Pod you can find one or more containers.
+
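+While there is no kubectl get containers, you can still list the containers of a specific Pod, for example (the Pod name here is hypothetical):
+
+```
+kubectl get pod my-app-pod -o jsonpath='{.spec.containers[*].name}'
+```
+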
+ +
+What actions or operations do you consider as best practices when it comes to Kubernetes?
+
+ - Always make sure Kubernetes YAML files are valid. Applying automated checks and pipelines is recommended.
+ - Always specify requests and limits to prevent a situation where containers use the entire cluster's memory, which may lead to OOM issues (see the example below)
+
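+A minimal sketch of setting requests and limits on a container (the Pod name, image and values are illustrative):
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: limited-pod
+spec:
+  containers:
+  - name: app
+    image: nginx:alpine
+    resources:
+      requests:
+        memory: "64Mi"
+        cpu: "250m"
+      limits:
+        memory: "128Mi"
+        cpu: "500m"
+```
+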
+ #### Kubernetes - Cluster @@ -150,7 +143,7 @@ False. A Kubernetes cluster consists of at least 1 master and can have 0 workers
-What are the components of the master node?
+What are the components of the master node (aka control plane)?
* API Server - the Kubernetes API. All cluster components communicate through it * Scheduler - assigns an application with a worker node it can run on @@ -159,7 +152,7 @@ False. A Kubernetes cluster consists of at least 1 master and can have 0 workers
-What are the components of a worker node?
+What are the components of a worker node (aka data plane)?
* Kubelet - an agent responsible for node communication with the master. * Kube-proxy - load balancing traffic between app components @@ -196,8 +189,13 @@ Apply requests and limits, especially on third party applications (where the unc 5. Create an etcd cluster
- -#### Kubernetes - Pods +
+Which command will list all the object types in a cluster?
+ +`kubectl api-resources` +
+
+### Pods
Explain what is a Pod
@@ -413,13 +411,32 @@ Only containers whose state set to Success will be able to receive requests sent
-Why it's usually considered better to include one container per Pod?
+Why is it common to have only one container per Pod in most cases?
-One reason is that it makes it harder to scale, when you need to scale only one of the containers in a given Pod.
+One reason is that having multiple containers in one Pod makes it harder to scale, since you can't scale only one of the containers in a given Pod.
- -#### Kubernetes - Deployments +
+True or False? Once a Pod is assigned to a worker node, it will only run on that node, even if it fails at some point
+
+True. A Pod is scheduled to a node only once. If it fails, its containers may be restarted on the same node, but the Pod itself is never moved to another node (a controller such as a Deployment may create a replacement Pod elsewhere).
+
+ +
+True or False? Each Pod, when created, gets its own public IP address
+
+False. Each Pod gets an IP address, but an internal one that is not publicly accessible.
+
+To make a Pod externally accessible, we need to use a Kubernetes object called a Service.
+
+ +
+How to check which worker node the Pods were scheduled to?
+ +`kubectl get pods -o wide` +
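+Illustrative (trimmed) output, where the NODE column shows the worker node; names and values will differ in your cluster:
+
+```
+NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE
+nginx-7c5ddbdf54-2xkcv   1/1     Running   0          5m    10.244.1.4   worker-1
+```
+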
+
+### Deployments
What is a "Deployment" in Kubernetes?
@@ -433,6 +450,10 @@ A Deployment is a declarative statement for the desired state for Pods and Repli
How to create a deployment?
+`kubectl create deployment my-first-deployment --image=nginx:alpine`
+
+OR
+
 ```
 cat << EOF | kubectl create -f -
 apiVersion: v1
@@ -442,11 +463,19 @@ metadata:
 spec:
   containers:
   - name: nginx
-    image: nginx
+    image: nginx:alpine
 EOF
 ```
+
+How to verify a deployment was created?
+
+`kubectl get deployments`
+
+This command lists all the Deployment objects that exist in the cluster. Being listed doesn't mean the deployments are ready and running; that can be checked with the "READY" and "AVAILABLE" columns.
+
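+Illustrative output (names and values will differ in your cluster):
+
+```
+NAME    READY   UP-TO-DATE   AVAILABLE   AGE
+nginx   1/1     1            1           2m
+```
+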
+
How to edit a deployment?
@@ -464,7 +493,7 @@ Also, when looking at the replicaset, you'll see the old replica doesn't have an
How to delete a deployment?
-One way is by specifying the deployment name: `kubectl delete deployment [deployment_name]` +One way is by specifying the deployment name: `kubectl delete deployment [deployment_name]`
Another way is using the deployment configuration file: `kubectl delete -f deployment.yaml`
@@ -474,34 +503,59 @@ Another way is using the deployment configuration file: `kubectl delete -f deplo The pod related to the deployment will terminate and the replicaset will be removed.
+
+What happens behind the scenes when you create a Deployment object?
+
+The following occurs when you run `kubectl create deployment some-deployment --image=nginx`:
+
+1. An HTTP request is sent to the Kubernetes API server on the cluster to create a new Deployment object
+2. The Deployment creates a ReplicaSet which, in turn, creates a new Pod object that is scheduled to one of the worker nodes
+3. Kubelet on the worker node notices the new Pod and instructs the container runtime to pull the image from the registry
+4. A new container is created using the image that was just pulled
+
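+A quick way to see the chain of objects this creates (a sketch, using the example name above):
+
+```
+kubectl get deployment,replicaset,pod
+```
+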
+
How make an app accessible on private or external network?
Using a Service.
+### Services + +
+What is a Service in Kubernetes?
+ +"An abstract way to expose an application running on a set of Pods as a network service." - read more [here](https://kubernetes.io/docs/concepts/services-networking/service)
+
+In simpler words, it allows you to add internal or external connectivity to a certain application running in a container.
+
+ +
+How to create a service for an existing deployment called "alle" on port 8080 so that the Pod(s) are accessible via a LoadBalancer?
+ +The imperative way: + +`kubectl expose deployment alle --type=LoadBalancer --port 8080` +
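+A declarative alternative is to apply a Service manifest. A minimal sketch (the `app: alle` selector is an assumption about how the deployment labels its Pods):
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+  name: alle
+spec:
+  type: LoadBalancer
+  selector:
+    app: alle
+  ports:
+  - port: 8080
+    targetPort: 8080
+```
+
+Save it to a file (e.g. alle-service.yaml) and run `kubectl apply -f alle-service.yaml`.
+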
+
An internal load balancer in Kubernetes is called ____ and an external load balancer is called ____
An internal load balancer in Kubernetes is called Service and an external load balancer is Ingress
- -#### Kubernetes - Services - -
-What is a Service in Kubernetes?
- -"An abstract way to expose an application running on a set of Pods as a network service." - read more [here](https://kubernetes.io/docs/concepts/services-networking/service)
-In simpler words, it allows you to add an internal or external connectivity to a certain application running in a container. -
-
True or False? The lifecycle of Pods and Services isn't connected so when a Pod dies, the Service still stays
True
+
+After creating a service, how to check it was created?
+ +`kubectl get svc` +
+
What Service types are there?
@@ -522,7 +576,7 @@ The truth is they aren't connected. Service points to Pod(s) directly, without c
What are important steps in defining/adding a Service?
-1. Making sure that targetPort of the Service is matching the containerPort of the POd +1. Making sure that targetPort of the Service is matching the containerPort of the Pod 2. Making sure that selector matches at least one of the Pod's labels
@@ -685,7 +739,21 @@ Explanation as to who added them: - iptables rules are added by kube-proxy during Endpoint and Service creation
-#### Kubernetes - Ingress +
+Describe at a high level what happens when you run kubectl expose deployment remo --type=LoadBalancer --port 8080
+
+1. Kubectl sends a request to the Kubernetes API server to create a Service object
+2. Kubernetes asks the cloud provider (e.g. AWS, GCP, Azure) to provision a load balancer
+3. The newly created load balancer forwards incoming traffic to the relevant worker node(s), which forward the traffic to the relevant containers
+
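+One way to follow the provisioning (a sketch, assuming the service is named "remo" as in the question):
+
+```
+kubectl get svc remo --watch
+# EXTERNAL-IP shows <pending> until the cloud provider finishes provisioning the load balancer
+```
+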
+ +
+After creating a service that forwards incoming external traffic to the containerized application, how to make sure it works?
+
+You can run `curl <EXTERNAL-IP>:<PORT>` (using the external IP and port of the Service) to examine the output.
+
+ +### Ingress
What is Ingress?
@@ -812,6 +880,26 @@ True Network Policies
+
+How to scale an application (deployment) so it runs more than one instance of the application?
+
+To run two instances of the application:
+
+`kubectl scale deployment <deployment-name> --replicas=2`
+
+You can specify any other number, given that your application knows how to scale.
+
+ +### ReplicaSets + +
+What is the purpose of ReplicaSet?
+
+[kubernetes.io](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset): "A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods."
+
+In simpler words, a ReplicaSet will ensure the specified number of Pod replicas is running for a selected Pod. If there are more Pods than defined in the ReplicaSet, some will be removed. If there are fewer than what is defined in the ReplicaSet, then more replicas will be added.
+
+
What the following block of lines does? @@ -835,16 +923,6 @@ spec: It defines a replicaset for Pods whose type is set to "backend" so at any given point of time there will be 2 concurrent Pods running.
-#### Kubernetes - ReplicaSets - -
-What is the purpose of ReplicaSet?
- -[kubernetes.io](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset): "A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods." - -In simpler words, a ReplicaSet will ensure the specified number of Pods replicas is running for a selected Pod. If there are more Pods than defined in the ReplicaSet, some will be removed. If there are less than what is defined in the ReplicaSet then, then more replicas will be added. -
-
What will happen when a Pod, created by ReplicaSet, is deleted directly with kubectl delete po ...?
@@ -1280,7 +1358,7 @@ kubectl run nginx --image=nginx --restart=Never --port 80 --expose
-How to get list of resources which are not in a namespace?
+How to get list of resources which are not bound to a specific namespace?
kubectl api-resources --namespaced=false