Add Circle CI questions
In addition to a couple of k8s questions.
parent 64e6614680
commit ad66a50f3a
@@ -77,8 +77,10 @@
<td align="center"><a href="topics/perl/README.md"><img src="images/perl.png" width="75px;" height="75px;" alt="perl"/><br /><b>Perl</b></a></td>
</tr>
<tr>
<td align="center"><a href="topics/circleci/README.md"><img src="images/logos/circleci.png" width="70px;" height="70px;" alt="Circle CI"/><br /><b>Circle CI</b></a></td>
<td align="center"><a href="topics/argo/README.md"><img src="images/logos/argo.png" width="80px;" height="80px;" alt="Argo"/><br /><b>Argo</b></a></td>
<td align="center"><a href="topics/kafka/README.md"><img src="images/logos/kafka.png" width="70px;" height="80px;" alt="Kafka"/><br /><b>Kafka</b></a></td>
</tr>
</table>
</center>

BIN  images/logos/circleci.png  (new binary file, 2.6 KiB, not shown)

topics/circleci/README.md (new file, 44 lines)
@@ -0,0 +1,44 @@
# Circle CI

## Circle CI Questions

### Circle CI 101

<details>
<summary>What is Circle CI?</summary><br><b>

[Circle CI](https://circleci.com): "CircleCI is a continuous integration and continuous delivery platform that can be used to implement DevOps practices."
</b></details>

<details>
<summary>What are some benefits of Circle CI?</summary><br><b>

[Circle CI Docs](https://circleci.com/docs/about-circleci): "SSH into any job to debug your build issues.
Set up parallelism in your .circleci/config.yml file to run jobs faster.
Configure caching with two simple keys to reuse data from previous jobs in your workflow.
Configure self-hosted runners for unique platform support.
Access Arm resources for the machine executor.
Use orbs, reusable packages of configuration, to integrate with third parties.
Use pre-built Docker images in a variety of languages.
Use the API to retrieve information about jobs and workflows.
Use the CLI to access advanced tools locally.
Get flaky test detection with test insights."
</b></details>
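
Two of the benefits quoted above, parallelism and caching, map directly to keys in `.circleci/config.yml`. A minimal sketch, assuming a Python project (the image, cache key, and commands are illustrative, not from the source):

```yaml
version: 2.1

jobs:
  test:
    docker:
      - image: cimg/python:3.11       # assumed convenience image
    parallelism: 4                     # run this job across 4 parallel containers
    steps:
      - checkout
      - restore_cache:                 # reuse dependencies saved by a previous run
          keys:
            - deps-v1-{{ checksum "requirements.txt" }}
      - run: pip install -r requirements.txt
      - save_cache:
          key: deps-v1-{{ checksum "requirements.txt" }}
          paths:
            - ~/.cache/pip
      - run: pytest

workflows:
  test:
    jobs:
      - test
```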

<details>
<summary>What is an Orb?</summary><br><b>

[Circle CI Docs](https://circleci.com/developer/orbs): "Orbs are shareable packages of CircleCI configuration you can use to simplify your builds"

They can come from the public registry or be defined privately as part of an organization.
</b></details>
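
A rough sketch of how an orb from the public registry could be pulled into a pipeline; the orb name, version, and commands below are examples, not a prescribed setup:

```yaml
version: 2.1

orbs:
  node: circleci/node@5.1.0            # import a published orb (version is illustrative)

jobs:
  build:
    executor: node/default             # executor defined by the orb
    steps:
      - checkout
      - node/install-packages          # reusable command packaged in the orb
      - run: npm test

workflows:
  build:
    jobs:
      - build
```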

### Circle CI Hands-On 101

<details>
<summary>Where (in what location in the project) are Circle CI pipelines defined?</summary><br><b>

`.circleci/config.yml`
</b></details>
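
As a minimal illustration of what such a file could contain (the job name, image, and command are assumptions):

```yaml
# .circleci/config.yml
version: 2.1

jobs:
  build:
    docker:
      - image: cimg/base:stable        # assumed base image
    steps:
      - checkout
      - run: echo "Hello from CircleCI"

workflows:
  main:
    jobs:
      - build
```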

@@ -12,7 +12,9 @@
- [Deployments](#deployments)
- [Troubleshooting Deployments](#troubleshooting-deployments)
- [Scheduler](#scheduler)
- [Node Affinity](#node-affinity)
- [Labels and Selectors](#labels-and-selectors)
- [Node Selector](#node-selector)
- [Taints](#taints)

## Setup

@@ -537,6 +539,45 @@ spec:
Note: if you don't have a node1 in your cluster the Pod will be stuck in "Pending" state.
</b></details>

### Node Affinity

<details>
<summary>Using node affinity, set a Pod to schedule on a node where the key is "region" and value is either "asia" or "emea"</summary><br><b>

`vi pod.yaml`

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: region
          operator: In
          values:
          - asia
          - emea
```
</b></details>
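
For orientation, a sketch of where the fragment above sits inside a full Pod manifest (the Pod name and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: some-pod                       # assumed name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: region
            operator: In
            values:
            - asia
            - emea
  containers:
  - name: some-pod
    image: redis                       # assumed image, matching other examples in this topic
```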

<details>
<summary>Using node affinity, set a Pod to never schedule on a node where the key is "region" and value is "neverland"</summary><br><b>

`vi pod.yaml`

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: region
          operator: NotIn
          values:
          - neverland
```
</b></details>

## Labels and Selectors

<details>

@@ -557,6 +598,38 @@ Note: if you don't have a node1 in your cluster the Pod will be stuck on "Pendin
`k get deploy -l env=prod,type=web`
</b></details>

### Node Selector

<details>
<summary>Apply the label "hw=max" on one of the nodes in your cluster</summary><br><b>

`kubectl label nodes some-node hw=max`

</b></details>

<details>
<summary>Create and run a Pod called `some-pod` with the image `redis` and configure it to use the selector `hw=max`</summary><br><b>

```
kubectl run some-pod --image=redis --dry-run=client -o yaml > pod.yaml

vi pod.yaml

spec:
  nodeSelector:
    hw: max

kubectl apply -f pod.yaml
```

</b></details>

<details>
<summary>Explain why node selectors might be limited</summary><br><b>

Assume you would like to run your Pod on all the nodes with either `hw` set to max or to min, instead of just max. This is not possible with nodeSelectors, which are quite simplified, and this is where you might want to consider `node affinity`.
</b></details>
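
A sketch of how that broader intent could be expressed with node affinity instead, reusing the `hw` label from the questions above (the fragment belongs under the Pod's `spec`):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: hw
          operator: In                 # matches nodes where hw is either max or min
          values:
          - max
          - min
```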

## Taints

<details>

@@ -566,7 +639,36 @@
</b></details>

<details>
<summary>Create a taint on one of the nodes in your cluster with key of "app" and value of "web" and effect of "NoSchedule". Verify it was applied</summary><br><b>

`k taint node minikube app=web:NoSchedule`

`k describe no minikube | grep -i taints`
</b></details>

<details>
<summary>You applied a taint with <code>k taint node minikube app=web:NoSchedule</code> on the only node in your cluster and then executed <code>kubectl run some-pod --image=redis</code>. What will happen?</summary><br><b>

The Pod will remain in "Pending" status due to the only node in the cluster having a taint of "app=web".
</b></details>

<details>
<summary>You applied a taint with <code>k taint node minikube app=web:NoSchedule</code> on the only node in your cluster and then executed <code>kubectl run some-pod --image=redis</code>, but the Pod is in "Pending" state. How would you fix it?</summary><br><b>

`kubectl edit po some-pod` and add the following under `spec.tolerations`:

```
- effect: NoSchedule
  key: app
  operator: Equal
  value: web
```

Exit and save. The Pod should be in "Running" state now.
</b></details>

<details>
<summary>Remove an existing taint from one of the nodes in your cluster</summary><br><b>

`k taint node minikube app=web:NoSchedule-`
</b></details>

@@ -48,6 +48,7 @@ What's your goal?
- [Istio](#istio)
- [Controllers](#controllers)
- [Scheduler](#scheduler-1)
- [Node Affinity](#node-affinity)
- [Taints](#taints)
- [Scenarios](#scenarios)

@@ -79,6 +80,8 @@ What's your goal?
|Name|Topic|Objective & Instructions|Solution|Comments|
|--------|--------|------|----|----|
| Labels and Selectors 101 | Labels, Selectors | [Exercise](exercises/labels_and_selectors/exercise.md) | [Solution](exercises/labels_and_selectors/solution.md)
| Node Selectors | Labels, Selectors | [Exercise](exercises/node_selectors/exercise.md) | [Solution](exercises/node_selectors/solution.md)


### Scheduler

@@ -2475,6 +2478,57 @@ spec:
Note: if you don't have a node1 in your cluster the Pod will be stuck in "Pending" state.
</b></details>

#### Node Affinity

<details>
<summary>Using node affinity, set a Pod to schedule on a node where the key is "region" and value is either "asia" or "emea"</summary><br><b>

`vi pod.yaml`

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: region
          operator: In
          values:
          - asia
          - emea
```
</b></details>

<details>
<summary>Using node affinity, set a Pod to never schedule on a node where the key is "region" and value is "neverland"</summary><br><b>

`vi pod.yaml`

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: region
          operator: NotIn
          values:
          - neverland
```
</b></details>

<details>
<summary>True or False? Using the node affinity type "requiredDuringSchedulingIgnoredDuringExecution" means the scheduler can't schedule the Pod unless the rule is met</summary><br><b>

True
</b></details>

<details>
<summary>True or False? Using the node affinity type "preferredDuringSchedulingIgnoredDuringExecution" means the scheduler can't schedule the Pod unless the rule is met</summary><br><b>

False. The scheduler tries to find a node that meets the rule, and if it doesn't find one, it schedules the Pod anyway.
</b></details>
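
For contrast, a sketch of what a preferred (soft) rule looks like under a Pod's `spec`; the weight and the key/values are illustrative:

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1                        # soft preference, not a hard requirement
      preference:
        matchExpressions:
        - key: region
          operator: In
          values:
          - emea
```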

## Taints

<details>

@@ -2484,9 +2538,38 @@ Note: if you don't have a node1 in your cluster the Pod will be stuck on "Pendin
</b></details>

<details>
<summary>Create a taint on one of the nodes in your cluster with key of "app" and value of "web" and effect of "NoSchedule". Verify it was applied</summary><br><b>

`k taint node minikube app=web:NoSchedule`

`k describe no minikube | grep -i taints`
</b></details>

<details>
<summary>You applied a taint with <code>k taint node minikube app=web:NoSchedule</code> on the only node in your cluster and then executed <code>kubectl run some-pod --image=redis</code>. What will happen?</summary><br><b>

The Pod will remain in "Pending" status due to the only node in the cluster having a taint of "app=web".
</b></details>

<details>
<summary>You applied a taint with <code>k taint node minikube app=web:NoSchedule</code> on the only node in your cluster and then executed <code>kubectl run some-pod --image=redis</code>, but the Pod is in "Pending" state. How would you fix it?</summary><br><b>

`kubectl edit po some-pod` and add the following under `spec.tolerations`:

```
- effect: NoSchedule
  key: app
  operator: Equal
  value: web
```

Exit and save. The Pod should be in "Running" state now.
</b></details>

<details>
<summary>Remove an existing taint from one of the nodes in your cluster</summary><br><b>

`k taint node minikube app=web:NoSchedule-`
</b></details>

<details>

topics/kubernetes/exercises/node_selectors/exercise.md (new file, 12 lines)
@@ -0,0 +1,12 @@
# Node Selectors

## Objectives

1. Apply the label "hw=max" on one of the nodes in your cluster
2. Create and run a Pod called `some-pod` with the image `redis` and configure it to use the selector `hw=max`
3. Explain why node selectors might be limited

## Solution

Click [here](solution.md) to view the solution

topics/kubernetes/exercises/node_selectors/solution.md (new file, 29 lines)
@@ -0,0 +1,29 @@
# Node Selectors

## Objectives

1. Apply the label "hw=max" on one of the nodes in your cluster
2. Create and run a Pod called `some-pod` with the image `redis` and configure it to use the selector `hw=max`
3. Explain why node selectors might be limited

## Solution

1. `kubectl label nodes some-node hw=max`

2.

```
kubectl run some-pod --image=redis --dry-run=client -o yaml > pod.yaml

vi pod.yaml

spec:
  nodeSelector:
    hw: max

kubectl apply -f pod.yaml
```

3. Assume you would like to run your Pod on all the nodes with either `hw` set to max or to min, instead of just max. This is not possible with nodeSelectors, which are quite simplified, and this is where you might want to consider `node affinity`.

@@ -6,12 +6,8 @@
2. Create a taint on one of the nodes in your cluster with key of "app" and value of "web" and effect of "NoSchedule"
   1. Explain what it does exactly
   2. Verify it was applied
3. Run a Pod that will be able to run on the node on which you applied the taint

## Solution

Click [here](solution.md) to view the solution.

topics/kubernetes/exercises/taints_101/solution.md (new file, 30 lines)
@@ -0,0 +1,30 @@
# Taints 101

## Objectives

1. Check if one of the nodes in the cluster has taints (doesn't matter which node)
2. Create a taint on one of the nodes in your cluster with key of "app" and value of "web" and effect of "NoSchedule"
   1. Explain what it does exactly
   2. Verify it was applied
3. Run a Pod that will be able to run on the node on which you applied the taint

## Solution

1. `kubectl describe no minikube | grep -i taints`
2. `kubectl taint node minikube app=web:NoSchedule`
   1. Any resource with "app=web" key value will not be scheduled on node `minikube`
   2. `kubectl describe no minikube | grep -i taints`
3.

```
kubectl run some-pod --image=redis
kubectl edit po some-pod
```

Add the following under `spec.tolerations`:

```
- effect: NoSchedule
  key: app
  operator: Equal
  value: web
```

Save and exit. The Pod should be running.