Add different solutions to AWS exercises
In addition to the existing console solutions, add Terraform and Pulumi solutions. This change also fixes issues #279 and #280
parent 591ef7495b
commit 03a92d5bea
@@ -2,7 +2,7 @@
:information_source: This repo contains questions and exercises on various technical topics, sometimes related to DevOps and SRE

:bar_chart: There are currently **2393** exercises and questions
:bar_chart: There are currently **2402** exercises and questions

:books: To learn more about DevOps and SRE, check the resources in [devops-resources](https://github.com/bregman-arie/devops-resources) repository

@@ -1,7 +1,7 @@
# AWS

**Note**: Provided solutions are using the AWS console. It's recommended you use IaC technologies to solve the exercises (e.g. Terraform, Pulumi).<br>
**2nd Note**: Some of the exercises cost money and can't be performed using the free tier/resources
**2nd Note**: Some of the exercises cost $$$ and can't be performed using the free tier/resources

- [AWS](#aws)
  - [Exercises](#exercises)
@@ -15,6 +15,7 @@
    - [Containers](#containers)
    - [Lambda](#lambda)
    - [Elastic Beanstalk](#elastic-beanstalk)
    - [CodePipeline](#codepipeline)
    - [Misc](#misc)
  - [Questions](#questions)
    - [Global Infrastructure](#global-infrastructure)
@@ -39,6 +40,7 @@
    - [Disaster Recovery](#disaster-recovery)
    - [CloudFront](#cloudfront)
    - [ELB](#elb-1)
      - [ALB](#alb)
    - [Auto Scaling Group](#auto-scaling-group)
    - [Security](#security-1)
    - [Databases](#databases-1)
@@ -58,6 +60,7 @@
    - [Production Operations and Migrations](#production-operations-and-migrations)
    - [Scenarios](#scenarios)
    - [Architecture Design](#architecture-design)
    - [Misc](#misc-2)

## Exercises

@@ -1395,15 +1398,17 @@ True. AWS responsible for making sure ELB is operational and takes care of lifec
</b></details>

<details>
<summary>Which load balancer would you use for services which use HTTP or HTTPS traffic?</summary><br><b>

Application Load Balancer (ALB).
<summary>What's a "listener" in regards to ELB?</summary><br><b>
</b></details>

<details>
<summary>True or False? With ALB (Application Load Balancer) it's possible to do routing based on query string and/or headers</summary><br><b>
<summary>What's a "target group" in regards to ELB?</summary><br><b>
</b></details>

True.
<details>
<summary>Which load balancer would you use for services which use HTTP or HTTPS traffic?</summary><br><b>

Application Load Balancer (ALB).
</b></details>
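The ELB terms in the questions above (listener, target group) and the ALB query-string/header routing mentioned here can be illustrated together. A minimal boto3 sketch, offered only as an illustration alongside the Q&A; the ALB ARN and VPC ID are placeholders you would substitute:

```python
import boto3

elbv2 = boto3.client("elbv2")

alb_arn = "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/my-alb/aaaa1111"  # placeholder
vpc_id = "vpc-0123456789abcdef0"                                                                  # placeholder

# A target group is the set of targets (instances, IPs, Lambda) the ALB forwards traffic to
tg = elbv2.create_target_group(
    Name="exercise-tg", Protocol="HTTP", Port=80, VpcId=vpc_id,
    TargetType="instance", HealthCheckPath="/health",
)["TargetGroups"][0]

# A listener checks for connections on a given port/protocol and applies rules to them
listener = elbv2.create_listener(
    LoadBalancerArn=alb_arn, Protocol="HTTP", Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)["Listeners"][0]

# ALB listener rules can match on query string (similarly on HTTP headers via "http-header")
elbv2.create_rule(
    ListenerArn=listener["ListenerArn"], Priority=10,
    Conditions=[{"Field": "query-string",
                 "QueryStringConfig": {"Values": [{"Key": "version", "Value": "beta"}]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```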

<details>
@@ -1440,7 +1445,7 @@ For example, port `2017` and endpoint `/health`.

<details>
<summary>Which type of AWS load balancer is used in the following drawing?<br>
<img src="images/aws/identify_load_balancer.png" width="300px;" height="400px;"/>
<img src="../../images/aws/identify_load_balancer.png"/>
</summary><br><b>

Application Load Balancer (routing based on different endpoints + HTTP is used).
@@ -1525,12 +1530,6 @@ False. This is only supported in Classic Load Balancer and Application Load Bala

With cross zone load balancing, traffic is distributed evenly across all (registered) instances in all the availability zones.
</b></details>
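Cross zone load balancing is exposed as a load balancer attribute: always on for an ALB, toggleable for a Network Load Balancer. A minimal boto3 sketch as an illustration (the NLB ARN is a placeholder):

```python
import boto3

elbv2 = boto3.client("elbv2")
nlb_arn = "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/my-nlb/bbbb2222"  # placeholder

# Enable cross zone load balancing on an NLB (it is disabled by default there)
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=nlb_arn,
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```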

<details>
<summary>True or False? For application load balancer, cross zone load balancing is always on and can't be disabled</summary><br><b>

True
</b></details>

<details>
<summary>True or False? For network load balancer, cross zone load balancing is always on and can't be disabled</summary><br><b>

@@ -1540,7 +1539,7 @@ False. It's disabled by default

<details>
<summary>True or False? In regards to cross zone load balancing, AWS charges you for inter AZ data in network load balancer but not in application load balancer</summary><br><b>

False. It charges fir inter AZ data in network load balancer, but not in application load balancer
False. It charges for inter AZ data in network load balancer, but not in application load balancer
</b></details>

<details>
@@ -1555,6 +1554,20 @@ True

The period of time or process of "draining" instances from requests/traffic (basically letting them complete all active connections without starting new ones) so they can eventually be de-registered and the ELB won't send requests/traffic to them anymore.
</b></details>
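For ALB/NLB target groups this draining period is configured through the deregistration delay attribute. A small boto3 sketch for illustration (placeholder target group ARN):

```python
import boto3

elbv2 = boto3.client("elbv2")
tg_arn = "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/exercise-tg/cccc3333"  # placeholder

# Give in-flight requests up to 120 seconds to finish before the target is fully deregistered
elbv2.modify_target_group_attributes(
    TargetGroupArn=tg_arn,
    Attributes=[{"Key": "deregistration_delay.timeout_seconds", "Value": "120"}],
)
```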

#### ALB

<details>
<summary>True or False? With ALB (Application Load Balancer) it's possible to do routing based on query string and/or headers</summary><br><b>

True.
</b></details>

<details>
<summary>True or False? For application load balancer, cross zone load balancing is always on and can't be disabled</summary><br><b>

True
</b></details>

### Auto Scaling Group

<details>
@@ -3157,3 +3170,12 @@ Network Load Balancer

You can use an ElastiCache cluster or RDS Read Replicas.
</b></details>

### Misc

<details>
<summary>What's an ARN?</summary><br><b>

ARN (Amazon Resource Name) is used to uniquely identify different AWS resources.
It is used when you need to identify a resource uniquely across all of AWS infrastructure.
</b></details>
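For reference, an ARN follows the format `arn:partition:service:region:account-id:resource`. A tiny Python sketch splitting a made-up example (the account ID and instance ID are fictional):

```python
# Format: arn:partition:service:region:account-id:resource
arn = "arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234ef567890"  # fictional example

partition, service, region, account_id, resource = arn.split(":", 5)[1:]
print(service, region, resource)  # ec2 us-east-1 instance/i-0abcd1234ef567890
```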
@@ -1,6 +1,11 @@
## AWS VPC - My First VPC
# My First VPC

### Objectives
## Objectives

1. Create a new VPC
   1. It should have a CIDR that supports using at least 60,000 hosts (see the quick check below)
   2. It should be named "exercise-vpc"

## Solution

Click [here](solution.md) to view the solution
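A quick sanity check on why the /16 block used in the solutions covers the "at least 60,000 hosts" requirement:

```python
prefix = 16
addresses = 2 ** (32 - prefix)  # 65,536 addresses in a 10.0.0.0/16 block
print(addresses >= 60_000)      # True (AWS reserves a handful of addresses per subnet)
```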
topics/aws/exercises/new_vpc/main.tf (new file, 0 lines)
topics/aws/exercises/new_vpc/pulumi/__main__.py (new file, 10 lines)
@@ -0,0 +1,10 @@
import pulumi
import pulumi_awsx as awsx

vpc = awsx.ec2.Vpc("exercise-vpc", cidr_block="10.0.0.0/16")

pulumi.export("vpc_id", vpc.vpc_id)
pulumi.export("publicSubnetIds", vpc.public_subnet_ids)
pulumi.export("privateSubnetIds", vpc.private_subnet_ids)

# Run 'pulumi up' to create it
@@ -1,17 +1,30 @@
## AWS VPC - My First VPC
# My First VPC

### Objectives
## Objectives

1. Create a new VPC
   1. It should have a CIDR that supports using at least 60,000 hosts
   2. It should be named "exercise-vpc"

### Solution
## Solution

#### Console
### Console

1. Under "Virtual Private Cloud" click on "Your VPCs"
2. Click on "Create VPC"
3. Insert a name (e.g. someVPC)
3. Insert a name - "exercise-vpc"
4. Insert IPv4 CIDR block: 10.0.0.0/16
5. Keep "Tenancy" at Default
6. Click on "Create VPC"

### Terraform

Click [here](terraform/main.tf) to view the solution

### Pulumi - Python

Click [here](pulumi/__main__.py) to view the solution

### Verify Solution

To verify you've created the VPC, you can run: `aws ec2 describe-vpcs --filters Name=tag:Name,Values=exercise-vpc`
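A boto3 equivalent of the verification command above, as an illustrative sketch rather than part of the solution files:

```python
import boto3

ec2 = boto3.client("ec2")
vpcs = ec2.describe_vpcs(
    Filters=[{"Name": "tag:Name", "Values": ["exercise-vpc"]}]
)["Vpcs"]

# Expect exactly one VPC with the requested CIDR block
assert len(vpcs) == 1 and vpcs[0]["CidrBlock"] == "10.0.0.0/16"
print(vpcs[0]["VpcId"])
```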
topics/aws/exercises/new_vpc/terraform/main.tf (new file, 11 lines)
@@ -0,0 +1,11 @@
resource "aws_vpc" "exercise-vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "exercise-vpc"
  }
}

output "vpc-id" {
  value = aws_vpc.exercise-vpc.id
}
@@ -2,7 +2,8 @@

### Requirements

Single newly created VPC
1. Single newly created VPC
2. Region with more than two availability zones

### Objectives

topics/aws/exercises/subnets/pulumi/__main__.py (new file, 27 lines)
@@ -0,0 +1,27 @@
import pulumi
import pulumi_aws as aws

# ID of the VPC created in the previous exercise,
# supplied with: pulumi config set vpc_id <vpc-id>
config = pulumi.Config()
vpc_id = config.require("vpc_id")

availableZones = aws.get_availability_zones(state="available")

aws.ec2.Subnet("NewSubnet1",
    vpc_id=vpc_id,
    cidr_block="10.0.0.0/24",
    availability_zone=availableZones.names[0],
    tags={"Name": "NewSubnet1"}
)

aws.ec2.Subnet("NewSubnet2",
    vpc_id=vpc_id,
    cidr_block="10.0.1.0/24",
    availability_zone=availableZones.names[1],
    tags={"Name": "NewSubnet2"}
)

aws.ec2.Subnet("NewSubnet3",
    vpc_id=vpc_id,
    cidr_block="10.0.2.0/24",
    availability_zone=availableZones.names[2],
    tags={"Name": "NewSubnet3"}
)

# Run "pulumi up"
@@ -1,26 +1,27 @@
## AWS VPC - Subnets
# AWS VPC - Subnets

### Requirements
## Requirements

Single newly created VPC
1. Single newly created VPC
2. Region with more than two availability zones

### Objectives
## Objectives

1. Create a subnet in your newly created VPC
   1. CIDR: 10.0.0.0/24
   2. Name: NewSubnet1
   1. CIDR: 10.0.0.0/24
   1. Name: NewSubnet1
2. Create additional subnet
   1. CIDR: 10.0.1.0/24
   2. Name: NewSubnet2
   3. Different AZ compared to previous subnet
3. Create additional subnet
   1. CIDR: 10.0.2.0/24
   2. Name: NewSubnet3
   3. Different AZ compared to previous subnets
   4. CIDR: 10.0.2.0/24
   5. Name: NewSubnet3
   6. Different AZ compared to previous subnets

### Solution
## Solution

#### Console
### Console

1. Click on "Subnets" under "Virtual Private Cloud"
2. Make sure you filter by your newly created VPC (to not see the subnets in all other VPCs). You can do this in the left side menu
@@ -37,3 +38,11 @@ Single newly created VPC
13. Set the subnet name to "NewSubnet3"
14. Choose a different AZ
15. Set CIDR to 10.0.2.0/24

### Terraform

Click [here](terraform/main.tf) to view the solution

### Pulumi - Python

Click [here](pulumi/__main__.py) to view the solution
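To check the result, a boto3 sketch (illustrative, not one of the solution files) that confirms the three subnets exist and sit in different availability zones:

```python
import boto3

ec2 = boto3.client("ec2")
subnets = ec2.describe_subnets(
    Filters=[{"Name": "tag:Name", "Values": ["NewSubnet1", "NewSubnet2", "NewSubnet3"]}]
)["Subnets"]

for s in subnets:
    print(s["CidrBlock"], s["AvailabilityZone"])

# The exercise requires three subnets, each in a different AZ
assert len({s["AvailabilityZone"] for s in subnets}) == 3
```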
topics/aws/exercises/subnets/terraform/main.tf (new file, 49 lines)
@@ -0,0 +1,49 @@
# Variables

variable "vpc_id" {
  type = string
}

# Availability zones data source (required for the AZ references below)

data "aws_availability_zones" "all" {
  state = "available"
}

# AWS Subnets

resource "aws_subnet" "NewSubnet1" {
  cidr_block        = "10.0.0.0/24"
  vpc_id            = var.vpc_id
  availability_zone = data.aws_availability_zones.all.names[0]
  tags = {
    Purpose = "exercise"
    Name    = "NewSubnet1"
  }
}

resource "aws_subnet" "NewSubnet2" {
  cidr_block        = "10.0.1.0/24"
  vpc_id            = var.vpc_id
  availability_zone = data.aws_availability_zones.all.names[1]
  tags = {
    Purpose = "exercise"
    Name    = "NewSubnet2"
  }
}

resource "aws_subnet" "NewSubnet3" {
  cidr_block        = "10.0.2.0/24"
  vpc_id            = var.vpc_id
  availability_zone = data.aws_availability_zones.all.names[2]
  tags = {
    Purpose = "exercise"
    Name    = "NewSubnet3"
  }
}

# Outputs

output "NewSubnet1-id" {
  value = aws_subnet.NewSubnet1.id
}
output "NewSubnet2-id" {
  value = aws_subnet.NewSubnet2.id
}
output "NewSubnet3-id" {
  value = aws_subnet.NewSubnet3.id
}
@@ -13,12 +13,21 @@

## Kubernetes Questions

* [Kubernetes 101](#kubernetes-101)
* [Kubernetes Hands-On Basics](#kubernetes-hands-on-basiscs)
* [Kubernetes Cluster](#kubernetes-cluster)
* [Kubernetes Pods](#kubernetes-pods)
* [Kubernetes Deployments](#kubernetes-deployments)
* [Kubernetes Services](#kubernetes-services)
- [Kubernetes](#kubernetes)
  - [Kubernetes Exercises](#kubernetes-exercises)
  - [Kubernetes Questions](#kubernetes-questions)
    - [Kubernetes 101](#kubernetes-101)
    - [Kubernetes - Hands-On Basics](#kubernetes---hands-on-basics)
    - [Kubernetes - Cluster](#kubernetes---cluster)
    - [Pods](#pods)
    - [Deployments](#deployments)
    - [Services](#services)
    - [Ingress](#ingress)
    - [Kubernetes - Security](#kubernetes---security)
    - [Kubernetes - Troubleshooting Scenarios](#kubernetes---troubleshooting-scenarios)
    - [Kubernetes - Submariner](#kubernetes---submariner)
    - [Kubernetes - Istio](#kubernetes---istio)
    - [Kubernetes - Scenarios](#kubernetes---scenarios)

## Kubernetes 101

@@ -161,9 +170,9 @@ False. A Kubernetes cluster consists of at least 1 master and can have 0 workers

<details>
<summary>Place the components on the right side of the image in the right place in the drawing<br>
<img src="images/kubernetes_components.png"/>
<img src="images/cluster_architecture_exercise.png"/>
</summary><br><b>
<img src="images/kubernetes_components_solution.png"/>
<img src="images/cluster_architecture_solution.png"/>
</b></details>

<details>
topics/kubernetes/images/cluster_architecture_exercise.png (new binary file, 45 KiB, not shown)
topics/kubernetes/images/cluster_architecture_solution.png (new binary file, 43 KiB, not shown)
Binary file removed (49 KiB, not shown)
Binary file removed (45 KiB, not shown)
@@ -2,6 +2,14 @@

<details>
<summary>What is DevSecOps? What are its core principles?</summary><br><b>

A few definitions from companies in the field:

[Snyk](https://snyk.io/series/devsecops): "DevSecOps refers to the integration of security practices into a DevOps software delivery model. Its foundation is a culture where development and operations are enabled through process and tooling to take part in a shared responsibility for delivering secure software."

[Red Hat](https://www.redhat.com/en/topics/devops/what-is-devsecops): "DevSecOps stands for development, security, and operations. It's an approach to culture, automation, and platform design that integrates security as a shared responsibility throughout the entire IT lifecycle."

[Jfrog](https://jfrog.com/devops-tools/what-is-devsecops): "DevSecOps principles and practices parallel those of traditional DevOps with integrated and multidisciplinary teams, working together to enable secure continuous software delivery. The DevSecOps development lifecycle is a repetitive process that starts with a developer writing code, a build being triggered, the software package deployed to a production environment and monitored for issues identified in the runtime but includes security at each of these stages."
</b></details>

<details>