From ab61a49f84ed2831c904decb9e6202c6e3772ac9 Mon Sep 17 00:00:00 2001 From: abregman Date: Wed, 1 Sep 2021 01:02:32 +0300 Subject: [PATCH] Add a couple of questions and exercises SSIA --- README.md | 319 ++++++++++++++---- exercises/aws/hello_function.md | 3 + exercises/aws/solutions/hello_function.md | 49 +++ exercises/aws/solutions/url_function.md | 71 ++++ exercises/aws/url_function.md | 3 + exercises/git/solutions/squashing_commits.md | 49 +++ exercises/git/squashing_commits.md | 19 ++ exercises/kubernetes/killing_containers.md | 12 + .../solutions/killing_containers.md | 12 + 9 files changed, 473 insertions(+), 64 deletions(-) create mode 100644 exercises/aws/hello_function.md create mode 100644 exercises/aws/solutions/hello_function.md create mode 100644 exercises/aws/solutions/url_function.md create mode 100644 exercises/aws/url_function.md create mode 100644 exercises/git/solutions/squashing_commits.md create mode 100644 exercises/git/squashing_commits.md create mode 100644 exercises/kubernetes/killing_containers.md create mode 100644 exercises/kubernetes/solutions/killing_containers.md diff --git a/README.md b/README.md index 3d05cf6..0539a10 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ :information_source:  This repo contains questions and exercises on various technical topics, sometimes related to DevOps and SRE :) -:bar_chart:  There are currently **1657** questions +:bar_chart:  There are currently **1701** questions :books:  To learn more about DevOps and SRE, check the resources in [devops-resources](https://github.com/bregman-arie/devops-resources) repository @@ -855,6 +855,17 @@ False. Auto scaling adjusts capacity and this can mean removing some resources b ## AWS +### AWS Exercises + +#### AWS Lambda + +|Name|Topic|Objective & Instructions|Solution|Comments| +|--------|--------|------|----|----| +| Hello Function | Lambda | [Exercise](exercises/aws/hello_function.md) | [Solution](exercises/aws/solutions/hello_function.md) | | +| URL Function | Lambda | [Exercise](exercises/aws/url_function.md) | [Solution](exercises/aws/solutions/url_function.md) | | + +### AWS Self Assessment + #### AWS - Global Infrastructure
@@ -946,7 +957,7 @@ There can be several reasons for that. One of them is lack of policy. To solve t Only a login access.
-#### AWS Compute +#### AWS - Compute
What is EC2?
@@ -1030,7 +1041,13 @@ Scheduled RI - launch within time windows you reserve Learn more about EC2 RI [here](https://aws.amazon.com/ec2/pricing/reserved-instances)
-#### AWS Serverless Compute +
+You would like to invoke a function every time you enter a URL in the browser. Which service would you use for that?
+ +AWS Lambda +
+ +#### AWS - Lambda
Explain what is AWS Lambda
@@ -1057,6 +1074,12 @@ False. Charges are being made when the code is executed. - Python, Ruby, Go
+
+True or False? Basic lambda permissions allow you only to upload logs to Amazon CloudWatch Logs
+ +True +
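For context, the "basic" permissions referred to here come from the AWS managed policy `AWSLambdaBasicExecutionRole`, which at the time of writing is roughly the following document - CloudWatch Logs actions only, nothing else:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
```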
+ #### AWS Containers
@@ -1668,6 +1691,12 @@ Trusted Advisor AWS Snowball
+
+Which service would you use if you need a data warehouse?
+ +AWS RedShift +
+
Which service provides a virtual network dedicated to your AWS account?
@@ -1728,6 +1757,12 @@ SNS AWS Athena
+
+What would you use for preparing and combining data for analytics or ML?
+ +AWS Glue +
+
Which service would you use for monitoring malicious activity and unauthorized behavior in regards to AWS accounts and workloads?
@@ -1806,6 +1841,19 @@ Amazon S3 Transfer Acceleration Route 53
+
+Which services are involved in getting a custom string (based on the input) when inserting a URL in the browser?
+ +Lambda - to define a function that gets an input and returns a certain string
+API Gateway - to define the URL trigger (= when you insert the URL, the function is invoked). +
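A minimal sketch of the Lambda side (the field name is illustrative and assumes an API Gateway mapping template that passes a `name` value taken from the URL query string, as in the URL Function exercise added in this patch):

```
def lambda_handler(event, context):
    # 'name' is expected to be extracted from the URL (e.g. .../hello?name=mario)
    # by the API Gateway mapping template, so the returned string depends on the input
    name = event.get('name', 'stranger')
    return 'Hello ' + name
```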
+ +
+Which service would you use for data or events streaming?
+ +Kinesis +
+ #### AWS DNS
@@ -4955,6 +5003,10 @@ def cap(self, string): What is Ansible Collections?
+
+What is the difference between `include_tasks` and `import_tasks`?
+
+
File '/tmp/exercise' includes the following content

@@ -5317,6 +5369,22 @@ As such, tfstate shouldn't be stored in git repositories. secured storage such a
 - Don't edit it manually. tfstate was designed to be manipulated by terraform and not by users directly.
 - Store it in secured location (since it can include credentials and sensitive data in general)
 - Backup it regularly so you can roll-back easily when needed
+ - Store it in remote shared storage. This is especially needed when working in a team and the state can be updated by any of the team members
+ - Enable versioning if the storage where you store the state file supports it. Versioning is great for backups and roll-backs in case of an issue.
+
+ +
+How and why should concurrent edits of the state file be avoided?
+
+If two users or processes edit the state file concurrently, it can result in an invalid state file that doesn't actually represent the state of the resources.
+
+To avoid that, Terraform can apply state locking if the backend supports it. For example, AWS S3 supports state locking and consistency via DynamoDB. If the backend supports it, Terraform will often use state locking automatically, so nothing is required from the user to activate it.
+
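A minimal sketch of such a backend configuration (the bucket and table names are placeholders, and it assumes the S3 bucket and the DynamoDB table already exist):

```
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # placeholder: pre-created S3 bucket holding the state
    key            = "prod/terraform.tfstate"  # path of the state file inside the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"         # placeholder: pre-created table used for state locking
    encrypt        = true
  }
}
```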
+ +
+Describe how you manage state file(s) when you have multiple environments (e.g. development, staging and production)
+ +There is no right or wrong here, but it seems that the overall preferred way is to have a dedicated state file per environment.
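One common way to get a dedicated state file per environment (not the only one) is a directory per environment, each with its own backend `key`; the names below are just an example:

```
# environments/staging/main.tf
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "staging/terraform.tfstate"   # production would use e.g. "production/terraform.tfstate"
    region = "us-east-1"
  }
}
```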
@@ -5436,7 +5504,6 @@ The Terraform Registry provides a centralized location for official and communit |--------|--------|------|----|----| |My First Dockerfile|Dockerfile|[Link](exercises/write_dockerfile_run_container.md)|[Link](exercises/write_dockerfile_run_container.md) - ### Containers Self Assesment
@@ -5685,6 +5752,7 @@ Because each container has its own writable container layer, and all changes are |Name|Topic|Objective & Instructions|Solution|Comments| |--------|--------|------|----|----| | My First Pod | Pods | [Exercise](exercises/kubernetes/pods_01.md) | [Solution](exercises/kubernetes/solutions/pods_01_solution.md) +| "Killing" Containers | Pods | [Exercise](exercises/kubernetes/killing_containers.md) | [Solution](exercises/kubernetes/solutions/killing_containers.md) | Creating a service | Service | [Exercise](exercises/kubernetes/services_01.md) | [Solution](exercises/kubernetes/solutions/services_01_solution.md) ### Kubernetes Self Assesment @@ -5705,7 +5773,6 @@ To understand what Kubernetes is good for, let's look at some examples: What is a Kubernetes Cluster?
Red Hat Definition: "A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster. - At a minimum, a cluster contains a worker node and a master node." Read more [here](https://www.redhat.com/en/topics/containers/what-is-a-kubernetes-cluster) @@ -5730,8 +5797,8 @@ Read more [here](https://www.redhat.com/en/topics/containers/what-is-a-kubernete
What is a Node?
-A node is a virtual machine or a physical server that serves as a worker for running the applications. -It's recommended to have at least 3 nodes in Kubernetes production environment. +A node is a virtual or a physical machine that serves as a worker for running the applications.
+It's recommended to have at least 3 nodes in a production environment.
@@ -5744,12 +5811,6 @@ The master coordinates all the workflows in the cluster: * Rolling out new updates
-
-What do we need the worker nodes for?
- -The workers are the nodes which run the applications and workloads (Pods and containers). -
-
What is kubectl?
@@ -5797,8 +5858,7 @@ False. A Kubernetes cluster consists of at least 1 master and can have 0 workers
Explain what is a Pod
-A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. - +A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
@@ -5807,7 +5867,15 @@ Pods are the smallest deployable units of computing that you can create and mana `kubectl run my-pod --image=nginx:alpine --restart=Never` -If you are a Kubernetes beginner you should know that this is not a common way to run Pods. The common way is to run a Deployment which in turn runs Pod(s). +If you are a Kubernetes beginner you should know that this is not a common way to run Pods. The common way is to run a Deployment which in turn runs Pod(s).
+In addition, Pods and/or Deployments are usually defined in files rather than executed directly using only the CLI arguments. +
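For example, the same Pod could be described in a file (a minimal sketch, the file name is arbitrary) and created with `kubectl apply -f my-pod.yaml`:

```
# my-pod.yaml - roughly equivalent to the `kubectl run` command above
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  restartPolicy: Never
  containers:
  - name: my-pod
    image: nginx:alpine
```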
+ +
+What are your thoughts on "Pods are not meant to be created directly"?
+
+Pods are indeed usually not created directly. You'll notice that Pods are usually created as part of other entities, such as Deployments or ReplicaSets.
+If a Pod dies, Kubernetes will not bring it back. This is why it's more useful for example to define ReplicaSets that will make sure that a given number of Pods will always run, even after a certain Pod dies.
@@ -5819,7 +5887,8 @@ A pod can include multiple containers but in most cases it would probably be one
What use cases exist for running multiple containers in a single pod?
-A web application with separate (= in their own containers) logging and monitoring components/adapters.
+A web application with separate (= in their own containers) logging and monitoring components/adapters is one example.
+A CI/CD pipeline (using Tekton, for example) can run multiple containers in one Pod, since each step of a Tekton Task runs in its own container.
+
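A minimal sketch of the logging/adapter case (busybox is used here just as a stand-in for a real application and log shipper): the two containers run in one Pod and share an `emptyDir` volume, which is also an example of a volume being accessible to all containers of the Pod:

```
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
  - name: app                    # the "application", writing a log file
    image: busybox
    command: ["sh", "-c", "while true; do date >> /logs/app.log; sleep 5; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
  - name: log-shipper            # sidecar reading the same shared volume
    image: busybox
    command: ["sh", "-c", "tail -F /logs/app.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```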
@@ -5832,12 +5901,6 @@ A web application with separate (= in their own containers) logging and monitori * Pending - Containers are not yet running (Perhaps images are still being downloaded or the pod wasn't scheduled yet)
-
-What does it mean when one says "pods are ephemeral"?
- -It means they would eventually die and pods are unable to heal so it is recommended that you don't create them directly. -
-
True or False? By default, pods are isolated. This means they are unable to receive traffic from any source
@@ -5862,6 +5925,12 @@ False. "Pending" is after the Pod was accepted by the cluster, but the container `kubectl get pods --all-namespaces`
+
+True or False? A single Pod can be split across multiple nodes
+ +False. A single Pod can run on a single node. +
+
How to delete a pod?
@@ -5884,6 +5953,12 @@ False. "Pending" is after the Pod was accepted by the cluster, but the container Read more about it [here](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod)
+
+True or False? A volume defined in Pod can be accessed by all the containers of that Pod
+ +True. +
+
What happens when you run a Pod?
@@ -5908,6 +5983,7 @@ Read more about it [here](https://kubernetes.io/docs/tasks/configure-pod-contain After running kubectl run database --image mongo you see the status is "CrashLoopBackOff". What could possibly went wrong and what do you do to confirm?
"CrashLoopBackOff" means the Pod is starting, crashing, starting...and so it repeats itself.
+There are many different reasons to get this error - lack of permissions, init-container misconfiguration, persistent volume connection issue, etc. One of the ways to check why it happened it to run `kubectl describe po ` and having a look at the exit code @@ -5917,7 +5993,40 @@ One of the ways to check why it happened it to run `kubectl describe po `. This will provide us with the logs from the containers running in that Pod. +Another way to check what's going on, is to run `kubectl logs `. This will provide us with the logs from the containers running in that Pod. +
+ +
+Explain the purpose of the following lines + +``` +livenessProbe: + exec: + command: + - cat + - /appStatus + initialDelaySeconds: 10 + periodSeconds: 5 +``` +
+ +These lines make use of `liveness probe`. It's used to restart a container when it reaches a non-desired state.
+In this case, if the command `cat /appStatus` fails, Kubernetes will kill the container and will apply the restart policy. The `initialDelaySeconds: 10` means that Kubelet will wait 10 seconds before running the command/probe for the first time. From that point on, it will run it every 5 seconds, as defined with `periodSeconds` +
+ +
+Explain the purpose of the following lines + +``` +readinessProbe: + tcpSocket: + port: 2017 + initialDelaySeconds: 15 + periodSeconds: 20 +``` +
+ +They define a readiness probe where the Pod will not be marked as "Ready" before it will be possible to connect to port 2017 of the container. The first check/probe will start after 15 seconds from the moment the container started to run and will continue to run the check/probe every 20 seconds until it will manage to connect to the defined port.
@@ -5927,6 +6036,38 @@ It wasn't able to pull the image specified for running the container(s). This ca More details can be obtained with `kubectl describe po `.
+
+What happens when you delete a Pod?
+ +1. The `TERM` signal is sent to kill the main processes inside the containers of the given Pod +2. Each container is given a period of 30 seconds to shut down the processes gracefully +3. If the grace period expires, the `KILL` signal is used to kill the processes forcefully and the containers as well +
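The 30 seconds mentioned above is only the default grace period; it can be changed per Pod (a sketch, names are illustrative) or overridden for a single deletion with `kubectl delete pod <pod-name> --grace-period=<seconds>`:

```
apiVersion: v1
kind: Pod
metadata:
  name: slow-shutdown-pod
spec:
  terminationGracePeriodSeconds: 60   # 60 seconds instead of the default 30
  containers:
  - name: app
    image: nginx:alpine
```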
+ +
+Explain liveness probes
+
+A liveness probe is a useful mechanism for restarting a container when a certain check/probe, defined by the user, fails.
+For example, the user can define that the command `cat /app/status` will run every X seconds and the moment this command fails, the container will be restarted. + +You can read more about it in [kubernetes.io](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes) +
+ +
+Explain readiness probes
+
+Readiness probes are used by the kubelet to know when a container is ready to start accepting traffic.
+For example, a readiness probe can be to connect to port 8080 on a container. Once the kubelet manages to connect to it, the Pod is marked as ready.
+
+You can read more about it in [kubernetes.io](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes)
+
+ +
+How does the readiness probe status affect Services when they are combined?
+
+Only Pods whose containers pass their readiness probes (their state is set to Success) are added to the Service's endpoints, so only they receive requests sent to the Service.
+
+ #### Kubernetes - Deployments
@@ -6185,6 +6326,67 @@ True Network Policies
+
+What does the following block of lines do?
+
+```
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      type: backend
+  template:
+    metadata:
+      labels:
+        type: backend
+    spec:
+      containers:
+      - name: httpd-yup
+        image: httpd
+```
+
+It defines a ReplicaSet for Pods whose label `type` is set to "backend", so at any given point in time there will be 2 such Pods running.
+
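For completeness, the same spec wrapped in a full (minimal) ReplicaSet manifest could look like this - only `apiVersion`, `kind` and an arbitrary `metadata.name` are added:

```
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      type: backend
  template:
    metadata:
      labels:
        type: backend
    spec:
      containers:
      - name: httpd-yup
        image: httpd
```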
+ +#### Kubernetes - ReplicaSets + +
+What is the purpose of ReplicaSet?
+ +A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods. +
+ +
+How does a ReplicaSet work?
+ +A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods it should create to meet the number of replicas criteria. + +A ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template. +
+ +
+What happens when a replica dies?
+
+ +
+What is the default number of replicas if not specified?
+ +1 +
+ +
+How to list all the ReplicaSets?
+ +kubectl get rs +
+ +
+What happens when for example the value of replicas is 2 but there are more than 2 Pods running that match the selector?
+ +It will terminate some of them in order to reach a state where only 2 Pods are running. +
+ #### Kubernetes - Network Policies
@@ -6669,27 +6871,7 @@ It includes: Explain StatefulSet
-#### Kubernetes ReplicaSet - -
-What is the purpose of ReplicaSet?
- -A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods. -
- -
-How a ReplicaSet works?
- -A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods it should create to meet the number of replicas criteria. - -A ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template. -
- -
-What happens when a replica dies?
-
- -#### Kubernetes Secrets +#### Kubernetes - Secrets
Explain Kubernetes Secrets
@@ -6761,7 +6943,7 @@ USER_PASSWORD environment variable will store the value from password key in the In other words, you reference a value from a Kubernetes Secret.
-#### Kubernetes Storage +#### Kubernetes - Volumes
True or False? Kubernetes provides data persistence out of the box, so when you restart a pod, data is saved
@@ -6794,6 +6976,10 @@ True What is PersistentVolumeClaim?
+
+Explain Volume Snapshots
+
+
True or False? Kubernetes manages data persistence
@@ -6812,6 +6998,18 @@ False Explain Access Modes
+
+What is CSI Volume Cloning?
+
+ +
+Explain "Ephemeral Volumes"
+
+ +
+What types of ephemeral volumes does Kubernetes support?
+
+
What is Reclaim Policy?
@@ -6866,7 +7064,7 @@ The pod is automatically assigned with the default service account (in the names `kubectl get serviceaccounts`
-#### Kubernetes Misc +#### Kubernetes - Misc
Explain what Kubernetes Service Discovery means
@@ -6920,20 +7118,6 @@ Scale the number of pods automatically on observed CPU utilization. When you delete a pod, is it deleted instantly? (a moment after running the command)
-
-How to delete a pod instantly?
- -Use "--grace-period=0 --force" -
- -
-Explain Liveness probe
-
- -
-Explain Readiness probe
-
-
What does being cloud-native mean?
@@ -9196,6 +9380,7 @@ Alert manager is responsible for alerts ;) |--------|--------|------|----|----| | My first Commit | Commit | [Exercise](exercises/git/commit_01.md) | [Solution](exercises/git/solutions/commit_01_solution.md) | | | Time to Branch | Branch | [Exercise](exercises/git/branch_01.md) | [Solution](exercises/git/solutions/branch_01_solution.md) | | +| Squashing Commits | Commit | [Exercise](exercises/git/squashing_commits.md) | [Solution](exercises/git/solutions/squashing_commits.md) | |
How do you know if a certain directory is a git repository?
@@ -9781,6 +9966,12 @@ In simpler words, think about it as an isolated environment for users to manage `oc get projects` will list all projects. The "STATUS" column can be used to see which projects are currently active.
+
+You have a new team member and you would like to assign them the "admin" role on your project in OpenShift. How do you achieve that?
+
+`oc adm policy add-role-to-user admin <user> -n <project>`
+
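For example (the user and project names here are made up):

```
oc adm policy add-role-to-user admin alice -n my-project
# verify the resulting role binding
oc get rolebinding -n my-project
```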
+ ## OpenShift - Images
@@ -11017,13 +11208,13 @@ Access control based on user roles (i.e., a collection of access authorizations
-## Security - Web +#### Security - Web
What is Nonce?
-## Security - SSH +#### Security - SSH
What is SSH how does it work?
@@ -11039,7 +11230,7 @@ Access control based on user roles (i.e., a collection of access authorizations What is the role of an SSH key?
-## Security Cryptography +#### Security - Cryptography
Explain Symmetrical encryption
diff --git a/exercises/aws/hello_function.md b/exercises/aws/hello_function.md
new file mode 100644
index 0000000..0c15b97
--- /dev/null
+++ b/exercises/aws/hello_function.md
@@ -0,0 +1,3 @@
+## Hello Function
+
+Create a basic AWS Lambda function that, when given a name, will return "Hello <name>"
diff --git a/exercises/aws/solutions/hello_function.md b/exercises/aws/solutions/hello_function.md
new file mode 100644
index 0000000..e14ab4b
--- /dev/null
+++ b/exercises/aws/solutions/hello_function.md
@@ -0,0 +1,49 @@
+## Hello Function - Solution
+
+### Exercise
+
+Create a basic AWS Lambda function that, when given a name, will return "Hello <name>"
+
+### Solution
+
+#### Define a function
+
+1. Go to the Lambda console panel and click on `Create function`
+   1. Give the function a name like `BasicFunction`
+   2. Select the `Python3` runtime
+   3. Now, to handle the function's permissions, we can attach an IAM role to our function, either by setting an existing role or by creating a new one. I selected "Create a new role from AWS policy templates"
+   4. In "Policy Templates" select "Simple Microservice Permissions"
+
+2. Next, you should see a text editor where you will insert code similar to the following
+
+#### Function's code
+```
+import json
+
+
+def lambda_handler(event, context):
+    firstName = event['name']
+    return 'Hello ' + firstName
+```
+3. Click on "Create Function"
+
+#### Define a test
+
+1. Now let's test the function. Click on "Test".
+2. Select "Create new test event"
+3. Set the "Event name" to whatever you'd like. For example "TestEvent"
+4. Provide keys to test
+
+```
+{
+    "name": "Spyro"
+}
+```
+5. Click on "Create"
+
+#### Test the function
+
+1. Choose the test event you've created (`TestEvent`)
+2. Click on the `Test` button
+3. You should see something similar to `Execution result: succeeded`
+4. If you go to AWS CloudWatch, you should see a related log stream
diff --git a/exercises/aws/solutions/url_function.md b/exercises/aws/solutions/url_function.md
new file mode 100644
index 0000000..fcbe991
--- /dev/null
+++ b/exercises/aws/solutions/url_function.md
@@ -0,0 +1,71 @@
+## URL Function - Solution
+
+Create a basic AWS Lambda function that will be triggered when you enter a URL in the browser
+
+### Solution
+
+#### Define a function
+
+1. Go to the Lambda console panel and click on `Create function`
+   1. Give the function a name like `urlFunction`
+   2. Select the `Python3` runtime
+   3. Now, to handle the function's permissions, we can attach an IAM role to our function, either by setting an existing role or by creating a new one. I selected "Create a new role from AWS policy templates"
+   4. In "Policy Templates" select "Simple Microservice Permissions"
+
+2. Next, you should see a text editor where you will insert code similar to the following
+
+#### Function's code
+```
+import json
+
+
+def lambda_handler(event, context):
+    firstName = event['name']
+    return 'Hello ' + firstName
+```
+3. Click on "Create Function"
+
+#### Define a test
+
+1. Now let's test the function. Click on "Test".
+2. Select "Create new test event"
+3. Set the "Event name" to whatever you'd like. For example "TestEvent"
+4. Provide keys to test
+
+```
+{
+    "name": "Spyro"
+}
+```
+5. Click on "Create"
+
+#### Test the function
+
+1. Choose the test event you've created (`TestEvent`)
+2. Click on the `Test` button
+3. You should see something similar to `Execution result: succeeded`
+4. 
If you go to AWS CloudWatch, you should see a related log stream
+
+#### Define a trigger
+
+We'll define a trigger so that the function is invoked when the URL is entered in the browser
+
+1. Go to the "API Gateway" console and click on "New API Option"
+2. Insert the API name and description, and click on "Create"
+3. Click on Action -> Create Resource
+4. Insert the resource name and path (e.g. the path can be /hello) and click on "Create Resource"
+5. Select the resource we've created and click on "Create Method"
+6. For "integration type" choose "Lambda Function" and insert the name we've given to the Lambda function we previously created. Make sure to also use the same region
+7. Confirm the settings and any required permissions
+8. Now click again on the resource and modify "Body Mapping Templates" so the template includes this:
+
+```
+{ "name": "$input.params('name')" }
+```
+9. Finally, save and click on Actions -> Deploy API
+
+#### Running the function
+
+1. In the API Gateway console, in the stages menu, select the API we've created and click on the GET option
+2. You'll see an invoke URL you can click on. You might have to modify it to include the input so it looks similar to this: `.../hello?name=mario`
+3. You should see `Hello mario` in your browser
diff --git a/exercises/aws/url_function.md b/exercises/aws/url_function.md
new file mode 100644
index 0000000..8fcf590
--- /dev/null
+++ b/exercises/aws/url_function.md
@@ -0,0 +1,3 @@
+## URL Function
+
+Create a basic AWS Lambda function that will be triggered when you enter a URL in the browser
diff --git a/exercises/git/solutions/squashing_commits.md b/exercises/git/solutions/squashing_commits.md
new file mode 100644
index 0000000..3876196
--- /dev/null
+++ b/exercises/git/solutions/squashing_commits.md
@@ -0,0 +1,49 @@
+## Git - Squashing Commits - Solution
+
+
+1. In a git repository, create a new file with the content "Mario" and commit the change
+
+```
+echo "Mario" > new_file
+git add new_file
+git commit -a -m "New file"
+```
+
+2. Change the content of the file you just created so the content is "Mario & Luigi" and create another commit
+
+```
+echo "Mario & Luigi" > new_file
+git commit -a -m "Added Luigi"
+```
+
+3. Verify you have two separate commits - `git log`
+
+4. Squash the two commits you've created into one commit
+
+```
+git rebase -i HEAD~2
+```
+
+You should see something similar to:
+
+```
+pick 5412076 New file
+pick 4016808 Added Luigi
+```
+
+Change `pick` to `squash` on the second commit:
+
+
+```
+pick 5412076 New file
+squash 4016808 Added Luigi
+```
+
+Save it and provide a commit message for the squashed commit
+
+### After you complete the exercise
+
+Answer the following:
+
+* What is the reason for squashing commits? - The history becomes cleaner and it's easier to track changes, without commits like "removed a character", for example.
+* Is it possible to squash more than 2 commits? - Yes
diff --git a/exercises/git/squashing_commits.md b/exercises/git/squashing_commits.md
new file mode 100644
index 0000000..6325008
--- /dev/null
+++ b/exercises/git/squashing_commits.md
@@ -0,0 +1,19 @@
+## Git - Squashing Commits
+
+### Objective
+
+Learn how to squash commits
+
+### Instructions
+
+1. In a git repository, create a new file with the content "Mario" and create a new commit
+2. Change the content of the file you just created so the content is "Mario & Luigi" and create another commit
+3. Verify you have two separate commits
+4. 
Squash the latest two commits into one commit
+
+### After you complete the exercise
+
+Answer the following:
+
+* What is the reason for squashing commits?
+* Is it possible to squash more than 2 commits?
diff --git a/exercises/kubernetes/killing_containers.md b/exercises/kubernetes/killing_containers.md
new file mode 100644
index 0000000..f703bee
--- /dev/null
+++ b/exercises/kubernetes/killing_containers.md
@@ -0,0 +1,12 @@
+## "Killing" Containers
+
+1. Run a Pod with a web service (e.g. httpd)
+2. Verify the web service is running with the `ps` command
+3. Check how many restarts the Pod has performed
+4. Kill the web service process
+5. Check how many restarts the Pod has performed
+6. Verify again that the web service is running
+
+## After you complete the exercise
+
+* Why did the "RESTARTS" count increase?
diff --git a/exercises/kubernetes/solutions/killing_containers.md b/exercises/kubernetes/solutions/killing_containers.md
new file mode 100644
index 0000000..c3c1378
--- /dev/null
+++ b/exercises/kubernetes/solutions/killing_containers.md
@@ -0,0 +1,12 @@
+## "Killing" Containers - Solution
+
+1. Run a Pod with a web service (e.g. httpd) - `kubectl run web --image registry.redhat.io/rhscl/httpd-24-rhel7`
+2. Verify the web service is running with the `ps` command - `kubectl exec web -- ps`
+3. Check how many restarts the Pod has performed - `kubectl get po web`
+4. Kill the web service process - `kubectl exec web -- kill 1`
+5. Check how many restarts the Pod has performed - `kubectl get po web`
+6. Verify again that the web service is running - `kubectl exec web -- ps`
+
+## After you complete the exercise
+
+* Why did the "RESTARTS" count increase? - `Because we killed the process and Kubernetes identified that the container wasn't running properly, so it restarted the container.`