Rename exercises dir

Name it "topics" instead, so it won't be
strange if some topics include an "exercises" directory.
This commit is contained in:
abregman
2022-08-02 01:51:39 +03:00
parent ea1d94d67b
commit 99c4e02ecf
235 changed files with 283 additions and 74 deletions

topics/ansible/README.md Normal file

@@ -0,0 +1,529 @@
## Ansible
### Ansible Exercises
|Name|Topic|Objective & Instructions|Solution|Comments|
|--------|--------|------|----|----|
| My First Task | Tasks | [Exercise](my_first_task.md) | [Solution](solutions/my_first_task.md) | |
| Upgrade and Update Task | Tasks | [Exercise](update_upgrade_task.md) | [Solution](solutions/update_upgrade_task.md) | |
| My First Playbook | Playbooks | [Exercise](my_first_playbook.md) | [Solution](solutions/my_first_playbook.md) | |
### Ansible Self Assessment
<details>
<summary>Describe each of the following components in Ansible, including the relationship between them:
* Task
* Inventory
* Module
* Play
* Playbook
* Role
</summary><br><b>
Task: a call to a specific Ansible module.
Module: the actual unit of code executed by Ansible on your own host or a remote host. Modules are indexed by category (database, file, network, …) and also referred to as task plugins.
Inventory: a file that defines the hosts and/or groups of hosts on which Ansible tasks are executed. The inventory file can be in one of many formats, depending on the inventory plugins you have. The most common formats are INI and YAML.
Play: one or more tasks executed on given host(s).
Playbook: one or more plays. Each play can be executed on the same or different hosts.
Role: Ansible roles allow you to group resources based on certain functionality/service so they can be easily reused. In a role, you have directories for variables, defaults, files, templates, handlers, tasks, and metadata. You can then use the role by simply specifying it in your playbook.
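As a sketch of that layout, a role named `myrole` (hypothetical name) typically looks like this:
```
roles/
  myrole/
    defaults/main.yml    # default variables (lowest precedence)
    vars/main.yml        # role variables
    files/               # static files copied to managed hosts
    templates/           # Jinja2 templates
    handlers/main.yml    # handlers triggered with notify
    tasks/main.yml       # the role's entry point task list
    meta/main.yml        # role metadata and dependencies
```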
</b></details>
<details>
<summary>How is Ansible different from other automation tools? (e.g. Chef, Puppet, etc.)</summary><br><b>
Ansible is:
* Agentless
* Minimal run requirements (Python & SSH) and simple to use
* Default mode is "push" (it also supports pull)
* Focus on simplicity and ease of use
</b></details>
<details>
<summary>True or False? Ansible follows the mutable infrastructure paradigm</summary><br><b>
True. With the immutable infrastructure approach, you replace infrastructure instead of modifying it.<br>
Ansible follows the mutable infrastructure paradigm: it allows you to change the configuration of different components. This approach is not perfect and has its own disadvantages, like "configuration drift", where different components may reach different states for different reasons.
</b></details>
<details>
<summary>True or False? Ansible uses declarative style to describe the expected end state</summary><br><b>
False. It uses a procedural style.
</b></details>
<details>
<summary>What kind of automation wouldn't you do with Ansible and why?</summary><br><b>
While it's possible to provision resources with Ansible, some prefer tools that follow the immutable infrastructure paradigm.
Ansible doesn't save state by default. So a task that creates 5 instances, for example, when executed again will create 5 additional instances (unless an
additional check is implemented or explicit names are provided), while other tools might check whether 5 instances already exist. If only 4 exist (by checking the state file, for example), one additional instance will be created to reach the end goal of 5 instances.
</b></details>
<details>
<summary>How do you list all modules and how can you see details on a specific module?</summary><br><b>
1. Ansible online docs
2. `ansible-doc -l` for list of modules and `ansible-doc [module_name]` for detailed information on a specific module
</b></details>
#### Ansible - Inventory
<details>
<summary>What is an inventory file and how do you define one?</summary><br><b>
An inventory file defines hosts and/or groups of hosts on which Ansible tasks are executed.
An example of inventory file:
```
192.168.1.2
192.168.1.3
192.168.1.4
[web_servers]
190.40.2.20
190.40.2.21
190.40.2.22
```
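Groups can also be nested. As a sketch (with hypothetical group names), the INI syntax uses a `:children` suffix:
```
[web_servers]
190.40.2.20

[db_servers]
190.40.2.30

[all_servers:children]
web_servers
db_servers
```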
</b></details>
<details>
<summary>What is a dynamic inventory file? When would you use one?</summary><br><b>
A dynamic inventory file tracks hosts from one or more sources like cloud providers and CMDB systems.
You should use one when working with external sources, especially when the hosts in your environment are automatically<br>
spun up and shut down, without you tracking every change in these sources.
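As a sketch, assuming the `amazon.aws` collection is installed, a dynamic inventory for AWS can be configured with the `aws_ec2` inventory plugin (the file name must end with `aws_ec2.yml`):
```
# inventory_aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
# group hosts by the value of their "Role" tag (hypothetical tag)
keyed_groups:
  - key: tags.Role
    prefix: role
```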
</b></details>
#### Ansible - Variables
<details>
<summary>Modify the following task to use a variable instead of the value "zlib" and have "zlib" as the default in case the variable is not defined
```
- name: Install a package
  package:
    name: "zlib"
    state: present
```
</summary><br><b>
```
- name: Install a package
  package:
    name: "{{ package_name|default('zlib') }}"
    state: present
```
</b></details>
<details>
<summary>How to make the variable "use_var" optional?
```
- name: Install a package
  package:
    name: "zlib"
    state: present
    use: "{{ use_var }}"
```
</summary><br><b>
With "default(omit)"
```
- name: Install a package
  package:
    name: "zlib"
    state: present
    use: "{{ use_var|default(omit) }}"
```
</b></details>
<details>
<summary>What would be the result of the following play?</summary><br><b>
```
---
- name: Print information about my host
  hosts: localhost
  gather_facts: 'no'
  tasks:
    - name: Print hostname
      debug:
        msg: "It's me, {{ ansible_hostname }}"
```
When given code, always inspect it thoroughly. If your answer is "this will fail" then you are right. We are using a fact (ansible_hostname), which is a piece of information gathered from the host we are running on. But in this case we disabled fact gathering (gather_facts: no), so the variable is undefined, which results in a failure.
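One straightforward fix is to enable fact gathering:
```
---
- name: Print information about my host
  hosts: localhost
  gather_facts: 'yes'
  tasks:
    - name: Print hostname
      debug:
        msg: "It's me, {{ ansible_hostname }}"
```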
</b></details>
<details>
<summary>When will the value '2017' be used in this case: `{{ lookup('env', 'BEST_YEAR') | default('2017', true) }}`?</summary><br><b>
When the environment variable 'BEST_YEAR' is empty or false.
</b></details>
<details>
<summary>If the value of a certain variable is 1, you would like to use the value "one". Otherwise, use "two". How would you do it?</summary><br><b>
`{{ (certain_variable == 1) | ternary("one", "two") }}`
</b></details>
<details>
<summary>The value of a certain variable you use is the string "True". You would like the value to be a boolean. How would you cast it?</summary><br><b>
`{{ some_string_var | bool }}`
</b></details>
<details>
<summary>You want to run an Ansible playbook only on a specific minor version of your OS. How would you achieve that?</summary><br><b>
</b></details>
<details>
<summary>What is the "become" directive used for in Ansible?</summary><br><b>
</b></details>
<details>
<summary>What are facts? How to see all the facts of a certain host?</summary><br><b>
</b></details>
<details>
<summary>What would be the result of running the following task? How to fix it?
```
- hosts: localhost
  tasks:
    - name: Install zlib
      package:
        name: zlib
        state: present
```
</summary><br><b>
</b></details>
<details>
<summary>Which Ansible best practices are you familiar with? Name at least three</summary><br><b>
</b></details>
<details>
<summary>Explain the directory layout of an Ansible role</summary><br><b>
</b></details>
<details>
<summary>What 'blocks' are used for in Ansible?</summary><br><b>
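Blocks group tasks so that directives such as `become`, `when` or error handling apply to all of them at once. A minimal sketch (the nginx setup is just an example):
```
- name: Set up a web server as one unit
  block:
    - name: Install nginx
      package:
        name: nginx
        state: present
    - name: Start nginx
      service:
        name: nginx
        state: started
  become: yes                                     # applies to both tasks
  when: ansible_facts['os_family'] == 'Debian'    # applies to both tasks
```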
</b></details>
<details>
<summary>How do you handle errors in Ansible?</summary><br><b>
</b></details>
<details>
<summary>You would like to run a certain command if a task fails. How would you achieve that?</summary><br><b>
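One way is a block with a `rescue` section, which runs only if a task in the block fails. A sketch with hypothetical commands:
```
- name: Run a command and recover on failure
  block:
    - name: The task that might fail
      command: /usr/bin/might_fail    # hypothetical command
  rescue:
    - name: Runs only if the block failed
      command: /usr/bin/cleanup       # hypothetical command
```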
</b></details>
<details>
<summary>Write a playbook to install zlib and vim on all hosts if the file /tmp/mario exists on the system.</summary><br><b>
```
---
- hosts: all
  vars:
    mario_file: /tmp/mario
    package_list:
      - 'zlib'
      - 'vim'
  tasks:
    - name: Check for mario file
      stat:
        path: "{{ mario_file }}"
      register: mario_f
    - name: Install zlib and vim if mario file exists
      become: "yes"
      package:
        name: "{{ item }}"
        state: present
      with_items: "{{ package_list }}"
      when: mario_f.stat.exists
```
</b></details>
<details>
<summary>Write a single task that verifies all the files in files_list variable exist on the host</summary><br><b>
```
- name: Ensure all files exist
  stat:
    path: "{{ item }}"
  register: result
  failed_when: not result.stat.exists
  loop: "{{ files_list }}"
```
</b></details>
<details>
<summary>Write a playbook to deploy the file /tmp/system_info on all hosts except for controllers group, with the following content</summary><br><b>
```
I'm <HOSTNAME> and my operating system is <OS>
```
Replace `<HOSTNAME>` and `<OS>` with the actual data for the specific host you are running on
The playbook to deploy the system_info file
```
---
- name: Deploy /tmp/system_info file
  hosts: all:!controllers
  tasks:
    - name: Deploy /tmp/system_info
      template:
        src: system_info.j2
        dest: /tmp/system_info
```
The content of the system_info.j2 template
```
# {{ ansible_managed }}
I'm {{ ansible_hostname }} and my operating system is {{ ansible_distribution }}
```
</b></details>
<details>
<summary>The variable 'whoami' is defined in the following places:
* role defaults -> whoami: mario
* extra vars (variables you pass to Ansible CLI with -e) -> whoami: toad
* host facts -> whoami: luigi
* inventory variables (doesn't matter which type) -> whoami: browser
According to variable precedence, which one will be used?</summary><br><b>
The right answer is toad.
Variable precedence is about how variables override each other when they are set in different locations. If you haven't experienced it so far, I'm sure at some point you will, which makes it a useful topic to be aware of.
In the context of our question, the order will be extra vars (always override any other variable) -> host facts -> inventory variables -> role defaults (the weakest).
Here is the order of precedence from least to greatest (the last listed variables winning prioritization):
1. command line values (e.g. "-u user")
2. role defaults [[1\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id15)
3. inventory file or script group vars [[2\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id16)
4. inventory group_vars/all [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
5. playbook group_vars/all [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
6. inventory group_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
7. playbook group_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
8. inventory file or script host vars [[2\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id16)
9. inventory host_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
10. playbook host_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
11. host facts / cached set_facts [[4\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id18)
12. play vars
13. play vars_prompt
14. play vars_files
15. role vars (defined in role/vars/main.yml)
16. block vars (only for tasks in block)
17. task vars (only for the task)
18. include_vars
19. set_facts / registered vars
20. role (and include_role) params
21. include params
22. extra vars (always win precedence)
A full list can be found at [Playbook Variables](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#ansible-variable-precedence). Also, note there is a significant difference between Ansible 1.x and 2.x.
</b></details>
<details>
<summary>For each of the following statements determine if it's true or false:
* A module is a collection of tasks
* It's better to use shell or command instead of a specific module
* Host facts override play variables
* A role might include the following: vars, meta, and handlers
* Dynamic inventory is generated by extracting information from external sources
* It's a best practice to use indentation of 2 spaces instead of 4
* notify is used to trigger handlers
* "hosts: all:!controllers" means run only on hosts in the controllers group</summary><br><b>
</b></details>
<details>
<summary>Explain the difference between forks, serial and throttle</summary><br><b>
`serial` runs the complete play on a batch of hosts before moving on to the next batch (e.g. `serial: 1` runs the whole play host by host). `forks` controls how many hosts a task runs on in parallel: with `forks=1`, the first task in a play runs on one host at a time, and it runs on every host before the next task is touched. The default forks value in Ansible is 5.
```
[defaults]
forks = 30
```
```
- hosts: webservers
  serial: 1
  tasks:
    - name: ...
```
Ansible also supports `throttle`. This keyword limits the number of workers up to the maximum set via the forks setting or serial. This can be useful for restricting tasks that are CPU-intensive or interact with a rate-limiting API.
```
tasks:
  - command: /path/to/cpu_intensive_command
    throttle: 1
```
</b></details>
<details>
<summary>What is ansible-pull? How is it different from how ansible-playbook works?</summary><br><b>
</b></details>
<details>
<summary>What is Ansible Vault?</summary><br><b>
</b></details>
<details>
<summary>Demonstrate each of the following with Ansible:
* Conditionals
* Loops
</summary><br><b>
</b></details>
<details>
<summary>What are filters? Do you have experience with writing filters?</summary><br><b>
</b></details>
<details>
<summary>Write a filter to capitalize a string</summary><br><b>
```
# placed in a file under filter_plugins/ next to the playbook
class FilterModule(object):
    def filters(self):
        return {'cap': lambda s: s.capitalize()}
```
</b></details>
<details>
<summary>You would like to run a task only if previous task changed anything. How would you achieve that?</summary><br><b>
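One way is to register the result of the first task and check its `changed` attribute (handlers with `notify` are the more idiomatic option when the second task is a service restart). A sketch with hypothetical names:
```
- name: Update the config file
  template:
    src: app.conf.j2       # hypothetical template
    dest: /etc/app.conf
  register: config_result

- name: Run only if the previous task changed something
  service:
    name: app              # hypothetical service
    state: restarted
  when: config_result.changed
```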
</b></details>
<details>
<summary>What are callback plugins? What can you achieve by using callback plugins?</summary><br><b>
</b></details>
<details>
<summary>What are Ansible Collections?</summary><br><b>
</b></details>
<details>
<summary>What is the difference between `include_tasks` and `import_tasks`?</summary><br><b>
</b></details>
<details>
<summary>File '/tmp/exercise' includes the following content
```
Goku = 9001
Vegeta = 5200
Trunks = 6000
Gotenks = 32
```
With one task, switch the content to:
```
Goku = 9001
Vegeta = 250
Trunks = 40
Gotenks = 32
```
</summary><br><b>
```
- name: Change saiyans levels
  lineinfile:
    dest: /tmp/exercise
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  with_items:
    - { regexp: '^Vegeta', line: 'Vegeta = 250' }
    - { regexp: '^Trunks', line: 'Trunks = 40' }
...
```
</b></details>
#### Ansible - Execution and Strategy
<details>
<summary>True or False? By default, Ansible will execute all the tasks in play on a single host before proceeding to the next host</summary><br><b>
False. Ansible will execute a single task on all hosts before moving to the next task in a play. As of today, it uses 5 forks by default.<br>
This behavior is described as "strategy" in Ansible and it's configurable.
</b></details>
<details>
<summary>What is a "strategy" in Ansible? What is the default strategy?</summary><br><b>
A strategy in Ansible describes how Ansible will execute the different tasks on the hosts. By default, Ansible uses the "linear" strategy, which means each task runs on all hosts before proceeding to the next task.
</b></details>
<details>
<summary>What strategies are you familiar with in Ansible?</summary><br><b>
- Linear: the default strategy in Ansible. Run each task on all hosts before proceeding.
- Free: For each host, run all the tasks until the end of the play as soon as possible
- Debug: Run tasks in an interactive way
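A strategy is set per play. For example, to let each host race to the end of the play independently:
```
- name: Run tasks without waiting for other hosts
  hosts: all
  strategy: free
  tasks:
    - name: ...
```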
</b></details>
<details>
<summary>What the <code>serial</code> keyword is used for?</summary><br><b>
It's used to specify the number (or percentage) of hosts to run the full play on, before moving on to the next batch of hosts in the group.
For example:
```
- name: Some play
  hosts: databases
  serial: 4
```
If your group has 8 hosts, it will run the whole play on 4 hosts and then the same play on the other 4 hosts.
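`serial` also accepts a percentage or a list of batch sizes, which is useful for gradually widening a rollout:
```
- name: Roll out in growing batches
  hosts: databases
  serial:
    - 1        # first batch: 1 host
    - "30%"    # then 30% of the group
    - "100%"   # then everything that remains
```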
</b></details>
#### Ansible Testing
<details>
<summary>How do you test your Ansible based projects?</summary><br><b>
</b></details>
<details>
<summary>What is Molecule? How does it work?</summary><br><b>
</b></details>
<details>
<summary>You run Ansible tests and you get "idempotence test failed". What does it mean? Why is idempotence important?</summary><br><b>
</b></details>
#### Ansible - Debugging
<details>
<summary>How do you find out the data type of a certain variable in one of the playbooks?</summary><br><b>
`{{ some_var | type_debug }}`
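For example, inside a debug task:
```
- name: Show the type of some_var
  debug:
    msg: "{{ some_var | type_debug }}"
```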
</b></details>
#### Ansible - Collections
<details>
<summary>What are collections in Ansible?</summary><br><b>
</b></details>


@@ -0,0 +1,6 @@
## Ansible - My First Playbook
1. Write a playbook that will:
a. Install the package zlib
b. Create the file `/tmp/some_file`
2. Run the playbook on a remote host


@@ -0,0 +1,3 @@
## Ansible - My First Task
1. Write a task to create the directory /tmp/new_directory


@@ -0,0 +1,28 @@
## My first playbook - Solution
1. `vi first_playbook.yml`
```
- name: Install zlib and create a file
  hosts: some_remote_host
  tasks:
    - name: Install zlib
      become: yes
      package:
        name: zlib
        state: present
    - name: Create the file /tmp/some_file
      file:
        path: '/tmp/some_file'
        state: touch
```
2. First, edit the inventory file: `vi /etc/ansible/hosts`
```
[some_remote_host]
some.remoted.host.com
```
Run the playbook
`ansible-playbook first_playbook.yml`


@@ -0,0 +1,8 @@
## My First Task - Solution
```
- name: Create a new directory
  file:
    path: "/tmp/new_directory"
    state: directory
```


@@ -0,0 +1,9 @@
## Update and Upgrade apt packages task - Solution
```
- name: "update and upgrade apt packages."
  become: yes
  apt:
    upgrade: yes
    update_cache: yes
```


@@ -0,0 +1,3 @@
## Ansible - Update and upgrade APT packages task
1. Write a task to update and upgrade apt packages

topics/aws/README.md Normal file

File diff suppressed because it is too large


@@ -0,0 +1,13 @@
## AWS IAM - Access Advisor
### Objectives
Go to the Access Advisor and answer the following questions regarding one of the users:
1. Are there services this user never accessed?
2. What was the last service the user has accessed?
3. What is the Access Advisor used for?
## Solution
Click [here to view to solution](solution.md)


@@ -0,0 +1,18 @@
## AWS IAM - Access Advisor
### Objectives
Go to the Access Advisor and answer the following questions regarding one of the users:
1. Are there services this user never accessed?
2. What was the last service the user has accessed?
3. What is the Access Advisor used for?
### Solution
1. Go to AWS IAM service and click on "Users" under "Access Management"
2. Click on one of the users
3. Click on the "Access Advisor" tab
4. Check which service was last accessed and which was never accessed
Access Advisor can be good for evaluating whether there are services the user is not accessing (as in never, or not frequently). This can help in deciding whether some permissions should be revoked or modified.


@@ -0,0 +1,15 @@
## AWS ELB - ALB Multiple Target Groups
### Requirements
Two EC2 instances with a simple web application that shows the web page with the string "Hey, it's a me, `<HOSTNAME>`!"
One EC2 instance with a simple web application that shows the web page with the string "Hey, it's only a test..." under the endpoint /test
### Objectives
1. Create an application load balancer for the two instances you have, with the following properties
1. healthy threshold: 3
2. unhealthy threshold: 3
3. interval: 10 seconds
2. Create another target group for the third instance
1. Traffic should be forwarded to this group based on the "/test" path


@@ -0,0 +1,44 @@
## AWS ELB - ALB Multiple Target Groups
### Requirements
Two EC2 instances with a simple web application that shows the web page with the string "Hey, it's a me, `<HOSTNAME>`!"
One EC2 instance with a simple web application that shows the web page with the string "Hey, it's only a test..." under the endpoint /test
### Objectives
1. Create an application load balancer for the two instances you have, with the following properties
1. healthy threshold: 3
2. unhealthy threshold: 3
3. interval: 10 seconds
2. Create another target group for the third instance
1. Traffic should be forwarded to this group based on the "/test" path
### Solution
#### Console
1. Go to EC2 service
2. Click in the left side menu on "Load balancers" under "Load balancing"
3. Click on "Create load balancer"
4. Choose "Application Load Balancer"
5. Insert a name for the LB
6. Choose an AZ where you want the LB to operate
7. Choose a security group
8. Under "Listeners and routing" click on "Create target group" and choose "Instances"
1. Provide a name for the target group
2. Set healthy threshold to 3
3. Set unhealthy threshold to 3
4. Set interval to 10 seconds
5. Click on "Next" and choose two out of three instances you've created
6. Click on "Create target group"
9. Refresh target groups and choose the one you've just created
10. Click on "Create load balancer" and wait for it to be provisioned
11. In the left side menu click on "Target Groups" under "Load Balancing"
12. Click on "Create target group"
13. Set it with the same properties as previous target group but this time, add the third instance that you didn't include in the previous target group
14. Go back to your ALB and under "Listeners" click on "Edit rules" under your current listener
1. Add a rule where if the path is "/test" then traffic should be forwarded to the second target group you've created
2. Click on "Save"
15. Test it by going to the browser, insert the address and add "/test" to the address


@@ -0,0 +1,13 @@
## AWS ELB - Application Load Balancer
### Requirements
Two EC2 instances with a simple web application that shows the web page with the string "Hey, it's a me, `<HOSTNAME>`!"
### Objectives
1. Create an application load balancer for the two instances you have, with the following properties
1. healthy threshold: 3
2. unhealthy threshold: 3
3. interval: 10 seconds
2. Verify load balancer is working (= you get reply from both instances at different times)


@@ -0,0 +1,35 @@
## AWS ELB - Application Load Balancer
### Requirements
Two EC2 instances with a simple web application that shows the web page with the string "Hey, it's a me, `<HOSTNAME>`!"
### Objectives
1. Create an application load balancer for the two instances you have, with the following properties
1. healthy threshold: 3
2. unhealthy threshold: 3
3. interval: 10 seconds
2. Verify load balancer is working (= you get reply from both instances at different times)
### Solution
#### Console
1. Go to EC2 service
2. Click in the left side menu on "Load balancers" under "Load balancing"
3. Click on "Create load balancer"
4. Choose "Application Load Balancer"
5. Insert a name for the LB
6. Choose an AZ where you want the LB to operate
7. Choose a security group
8. Under "Listeners and routing" click on "Create target group" and choose "Instances"
1. Provide a name for the target group
2. Set healthy threshold to 3
3. Set unhealthy threshold to 3
4. Set interval to 10 seconds
5. Click on "Next" and choose the two instances you've created
6. Click on "Create target group"
9. Refresh target groups and choose the one you've just created
10. Click on "Create load balancer" and wait for it to be provisioned
11. Copy DNS address and paste it in the browser. If you refresh, you should see different message based on the instance where the traffic was routed to


@@ -0,0 +1,16 @@
## AWS Auto Scaling Groups - Dynamic Scaling Policy
### Requirements
1. Existing Auto Scaling Group with maximum capacity set to at least 3
2. One running EC2 instance with max of 4 CPUs
### Objectives
1. Create a dynamic scaling policy with the following properties
1. Track average CPU utilization
2. Target value should be 70%
2. Increase the CPU utilization to at least 70%
1. Do you see change in number of instances?
1. Decrease CPU utilization to less than 70%
1. Do you see change in number of instances?


@@ -0,0 +1,37 @@
## AWS Auto Scaling Groups - Dynamic Scaling Policy
### Requirements
1. Existing Auto Scaling Group with maximum capacity set to at least 3
2. One running EC2 instance with max of 4 CPUs
### Objectives
1. Create a dynamic scaling policy with the following properties
1. Track average CPU utilization
2. Target value should be 70%
2. Increase the CPU utilization to at least 70%
1. Do you see change in number of instances?
1. Decrease CPU utilization to less than 70%
1. Do you see change in number of instances?
### Solution
#### Console
1. Go to EC2 service -> Auto Scaling Groups and click on the tab "Automatic scaling"
2. Choose "Target tracking scaling" under "Policy Type"
3. Set metric type to Average CPU utilization
4. Set target value to 70% and click on "Create"
1. If you are using Amazon Linux 2, you can stress the instance with the following:
```
sudo amazon-linux-extras install epel -y
sudo yum install stress -y
stress -c 4 # assuming you have 4 CPUs
```
2. Yes, additional EC2 instance was added
1. Simply stop the stress command
2. Yes, one of the EC2 instances was terminated


@@ -0,0 +1,14 @@
## AWS Databases - Aurora DB
### Objectives
1. Create an Aurora database with the following properties
* Edition: MySQL
* Instance type: db.t3.small
* A reader node in a different AZ
* Public access should be enabled
* Port should be set to 3306
* DB name: 'db'
* Backup retention: 10 days
2. How many instances does your DB cluster have?


@@ -0,0 +1,37 @@
## AWS Databases - Aurora DB
### Objectives
1. Create an Aurora database with the following properties
* Edition: MySQL
* Instance type: db.t3.small
* A reader node in a different AZ
* Public access should be enabled
* Port should be set to 3306
* DB name: 'db'
* Backup retention: 10 days
2. How many instances does your DB cluster have?
### Solution
#### Console
1. Go to RDS service
2. Click on "Databases" in the left side menu and click on the "Create database" button
3. Choose "standard create"
4. Choose "Aurora DB"
5. Choose "MySQL" edition and "Provisioned" as capacity type
6. Choose "single-master"
7. Specify Credentials (master username and password)
8. Choose DB instance type: Burstable classes, db.t3.small
9. Choose "Create an Aurora Replica or Reader node in a different AZ"
10. Choose a default VPC and subnet
11. Check "Yes" for public access
12. Database port should be 3306
13. For authentication, choose "Password and IAM database authentication"
14. Set initial database name as "db"
15. Increase backup retention period to 10 days
16. Click on "Create database" button
1. Two instances - one reader and one writer


@@ -0,0 +1,21 @@
## AWS Auto Scaling Groups - Basics
### Requirements
Zero EC2 instances running
### Objectives
A. Create a scaling group for web servers with the following properties:
* Amazon Linux 2 AMI
* t2.micro as the instance type
* user data:
```
yum install -y httpd
systemctl start httpd
systemctl enable httpd
```
B. Were new instances created since you created the auto scaling group? How many? Why?
C. Change desired capacity to 2. Did it launch more instances?
D. Change back the desired capacity to 1. What is the result of this action?


@@ -0,0 +1,48 @@
## AWS Auto Scaling Groups - Basics
### Requirements
Zero EC2 instances running
### Objectives
A. Create a scaling group for web servers with the following properties:
* Amazon Linux 2 AMI
* t2.micro as the instance type
* user data:
```
yum install -y httpd
systemctl start httpd
systemctl enable httpd
```
B. Were new instances created since you created the auto scaling group? How many? Why?
C. Change desired capacity to 2. Did it launch more instances?
D. Change back the desired capacity to 1. What is the result of this action?
### Solution
#### Console
A.
1. Go to EC2 service
2. Click on "Auto Scaling Groups" under "Auto Scaling"
3. Click on "Create Auto Scaling Group"
4. Insert a name
5. Click on "Create a launch template"
1. Insert a name and a version for the template
2. Select an AMI to use (Amazon Linux 2)
3. Select t2.micro instance type
4. Select a key pair
5. Attach a security group
6. Under "Advanced" insert the user data
7. Click on "Create"
6. Choose the launch template you've just created and click on "Next"
7. Choose "Adhere to launch template"
8. Choose in which AZs to launch and click on "Next"
9. Link it to ALB (if you don't have one, create it)
10. Mark ELB health check in addition to EC2. Click on "Next" until you reach the review page and click on "Create auto scaling group"
B. One instance was launched to meet the criteria of the auto scaling group we've created. The reason only one was launched is that "Desired capacity" is set to 1.
C. Change it by going to your auto scaling group -> Details -> Edit -> set "Desired capacity" to 2. This should create another instance if only one is running
D. Reducing desired capacity back to 1 will terminate one of the instances (assuming 2 are running).


@@ -0,0 +1,5 @@
## AWS - Budget Setup
### Objectives
Set up a cost budget in your AWS account based on your needs.


@@ -0,0 +1,18 @@
## AWS - Budget Setup
### Objectives
Set up a cost budget in your AWS account based on your needs.
### Solution
1. Go to "Billing"
2. Click on "Budgets" in the menu
3. Click on "Create a budget"
4. Choose "Cost Budget" and click on "Next"
5. Choose the values that work for you. For example, recurring monthly budget with a specific amount
6. Insert a budget name and Click on "Next"
7. Set up an alert by clicking on "Add an alert threshold"
1. Set a threshold (e.g. 75% of budgeted amount)
2. Set an email where a notification will be sent
8. Click on "Next" until you can click on "Create a budget"


@@ -0,0 +1,11 @@
## EC2 - Create an AMI
### Requirements
One running EC2 instance
### Objectives
1. Make some changes in the operating system of your instance (create files, modify files, ...)
2. Create an AMI image from running EC2 instance
3. Launch a new instance using the custom AMI you've created


@@ -0,0 +1,20 @@
## EC2 - Create an AMI
### Requirements
One running EC2 instance
### Objectives
1. Make some changes in the operating system of your instance (create files, modify files, ...)
2. Create an AMI image from running EC2 instance
3. Launch a new instance using the custom AMI you've created
### Solution
1. Connect to your EC2 instance (ssh, console, ...)
2. Make some changes in the operating system
3. Go to EC2 service
4. Right click on the instance where you made some changes -> Image and templates -> Create image
5. Give the image a name and click on "Create image"
6. Launch new instance and choose the image you've just created


@@ -0,0 +1,12 @@
## AWS - Create EFS
### Requirements
Two EC2 instances in different availability zones
### Objectives
1. Create an EFS with the following properties
1. Set lifecycle management to 60 days
2. The mode should match a use case of scaling to high levels of throughput and I/O operations per second
2. Mount the EFS in both of your EC2 instances


@@ -0,0 +1,27 @@
## AWS - Create EFS
### Requirements
Two EC2 instances in different availability zones
### Objectives
1. Create an EFS with the following properties
1. Set lifecycle management to 60 days
2. The mode should match a use case of scaling to high levels of throughput and I/O operations per second
2. Mount the EFS in both of your EC2 instances
### Solution
1. Go to EFS console
2. Click on "Create file system"
3. Click on "Customize"
1. Set lifecycle management to "60 days since last access"
2. Set Performance mode to "MAX I/O" due to the requirement of "Scaling to high levels of throughput"
3. Click on "Next"
4. Choose security group to attach (if you don't have any, create one and make sure it has a rule to allow NFS traffic) and click on "Next" until you are able to review and create it
5. SSH into your EC2 instances
1. Run `sudo yum install -y amazon-efs-utils`
2. Run `mkdir efs`
3. If you go to your EFS page and click on "Attach", you can see the available ways to mount the EFS on your instances
1. The command to mount the EFS should be similar to `sudo mount -t efs -o tls <EFS name>:/ efs` - copy and run it on each EC2 instance
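The file system itself can also be created from the CLI; a sketch under the exercise's requirements — the creation token and file system ID are placeholders:

```shell
# Create the file system in Max I/O performance mode
aws efs create-file-system \
    --creation-token my-efs \
    --performance-mode maxIO

# Set lifecycle management: transition to IA after 60 days without access
aws efs put-lifecycle-configuration \
    --file-system-id fs-0123456789abcdef0 \
    --lifecycle-policies TransitionToIA=AFTER_60_DAYS
```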

View File

@@ -0,0 +1,16 @@
## AWS - Create a Role
### Objectives
Create a basic role to provide EC2 service with Full IAM access permissions.<br>
In the end, run from the CLI (or CloudShell) the command to verify the role was created.
### Solution
1. Go to AWS console -> IAM
2. Click in the left side menu on "Access Management" -> Roles
3. Click on "Create role"
4. Choose "AWS service" as the type of trusted entity and then choose "EC2" as a use case. Click on "Next"
5. In the permissions page, check "IAMFullAccess" and click on "Next" until you get to the "Review" page
6. In the "Review" page, give the role a name (e.g. IAMFullAccessEC2), provide a short description and click on "Create role"
7. `aws iam list-roles` will list all the roles in the account, including the one we've just created.

View File

@@ -0,0 +1,16 @@
## AWS - Create a Role
### Objectives
Create a basic role to provide EC2 service with Full IAM access permissions.<br>
In the end, run from the CLI (or CloudShell) the command to verify the role was created.
### Solution
1. Go to AWS console -> IAM
2. Click in the left side menu on "Access Management" -> Roles
3. Click on "Create role"
4. Choose "AWS service" as the type of trusted entity and then choose "EC2" as a use case. Click on "Next"
5. In the permissions page, check "IAMFullAccess" and click on "Next" until you get to the "Review" page
6. In the "Review" page, give the role a name (e.g. IAMFullAccessEC2), provide a short description and click on "Create role"
7. `aws iam list-roles` will list all the roles in the account, including the one we've just created.
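The whole role setup can also be done from the CLI; a sketch, where the role name matches the one suggested above:

```shell
# Trust policy letting the EC2 service assume the role (saved as trust.json)
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role \
    --role-name IAMFullAccessEC2 \
    --assume-role-policy-document file://trust.json

# Attach the managed IAMFullAccess policy
aws iam attach-role-policy \
    --role-name IAMFullAccessEC2 \
    --policy-arn arn:aws:iam::aws:policy/IAMFullAccess

# Verify the role was created
aws iam list-roles
```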

View File

@@ -0,0 +1,9 @@
## AWS EC2 - Spot Instances
### Objectives
A. Create two Spot instances using a Spot Request with the following properties:
* Amazon Linux 2 AMI
* 2 instances as target capacity (at any given point of time) while each one has 2 vCPUs and 3 GiB RAM
B. Create a single Spot instance using Amazon Linux 2 and t2.micro

View File

@@ -0,0 +1,35 @@
## AWS EC2 - Spot Instances
### Objectives
A. Create two Spot instances using a Spot Request with the following properties:
* Amazon Linux 2 AMI
* 2 instances as target capacity (at any given point of time) while each one has 2 vCPUs and 3 GiB RAM
B. Create a single Spot instance using Amazon Linux 2 and t2.micro
### Solution
A. Create Spot Fleets:
1. Go to EC2 service
2. Click on "Spot Requests"
3. Click on "Request Spot Instances" button
4. Set the following values for parameters:
* Amazon Linux 2 AMI
* Total target capacity -> 2
* Check "Maintain target capacity"
* vCPUs: 2
* Memory: 3 GiB RAM
5. Click on Launch
B. Create a single Spot instance:
1. Go to EC2 service
2. Click on "Instances"
3. Click on "Launch Instances"
4. Choose "Amazon Linux 2 AMI" and click on "Next"
5. Choose t2.micro and click on "Next: Configure Instance Details"
6. Select "Request Spot instances"
7. Set Maximum price above current price
8. Click on "Review and Launch"

View File

@@ -0,0 +1,9 @@
## IAM AWS - Create a User
### Objectives
As you probably know at this point, it's not recommended to work with the root account in AWS. For this reason you are going to create a new user which you'll use regularly as the admin user.
1. Create a user with password credentials
2. Add the newly created user to a group called "admin" and attach to it the policy called "AdministratorAccess"
3. Make sure the user has a tag with the key `Role` and the value `DevOps`

View File

@@ -0,0 +1,25 @@
## IAM AWS - Create a User
### Objectives
As you probably know at this point, it's not recommended to work with the root account in AWS. For this reason you are going to create a new user which you'll use regularly as the admin user.
1. Create a user with password credentials
2. Add the newly created user to a group called "admin" and attach to it the policy called "AdministratorAccess"
3. Make sure the user has a tag with the key `Role` and the value `DevOps`
### Solution
1. Go to the AWS IAM service
2. Click on "Users" in the left side menu (right under "Access Management")
3. Click on the button "Add users"
4. Insert the user name (e.g. mario)
5. Select the credential type: "Password"
6. Set console password to custom and click on "Next"
7. Click on "Add user to group"
8. Insert "admin" as group name
9. Check the "AdministratorAccess" policy and click on "Create group"
10. Click on "Next: Tags"
11. Add a tag with the key `Role` and the value `DevOps`
12. Click on "Next: Review" and then on "Create user"
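The same user setup can be sketched with the AWS CLI — the user name and password below are placeholder examples:

```shell
# Create the user and a console password
aws iam create-user --user-name mario
aws iam create-login-profile --user-name mario --password 'S0me-Strong-Passw0rd!'

# Create the group, attach the AdministratorAccess policy and add the user to it
aws iam create-group --group-name admin
aws iam attach-group-policy \
    --group-name admin \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam add-user-to-group --group-name admin --user-name mario

# Tag the user with Role=DevOps
aws iam tag-user --user-name mario --tags Key=Role,Value=DevOps
```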

View File

@@ -0,0 +1,14 @@
## AWS Route 53 - Creating Records
### Requirements
At least one registered domain
### Objectives
1. Create the following record for your domain:
1. Record name: foo
2. Record type: A
3. Set some IP in the value field
2. Verify from the shell that you are able to use the record you've created to lookup for the IP address by using the domain name

View File

@@ -0,0 +1,26 @@
## AWS Route 53 - Creating Records
### Requirements
At least one registered domain
### Objectives
1. Create the following record for your domain:
1. Record name: foo
2. Record type: A
3. Set some IP in the value field
2. Verify from the shell that you are able to use the record you've created to lookup for the IP address by using the domain name
### Solution
1. Go to Route 53 service -> Hosted zones
2. Click on your domain name
3. Click on "Create record"
4. Insert "foo" in "Record name"
5. Set "Record type" to A
6. In "Value" insert "201.7.20.22"
7. Click on "Create records"
1. In your shell, type `nslookup foo.<YOUR DOMAIN>` or `dig foo.<YOUR DOMAIN>`
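The record can also be created from the CLI; a sketch where the hosted zone ID and domain are placeholders:

```shell
# Create the A record "foo" in the hosted zone
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789ABCDEF \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "foo.example.com",
          "Type": "A",
          "TTL": 300,
          "ResourceRecords": [{"Value": "201.7.20.22"}]
        }
      }]
    }'

# Verify the record resolves
nslookup foo.example.com
```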

View File

@@ -0,0 +1,9 @@
## AWS - Credential Report
### Objectives
1. Create/Download a credential report
2. Answer the following questions based on the report:
1. Are there users with MFA not activated?
2. Are there users with password enabled that didn't change it for a long time?
3. Explain the use case for using the credential report

View File

@@ -0,0 +1,18 @@
## AWS - Credential Report
### Objectives
1. Create/Download a credential report
2. Answer the following questions based on the report:
1. Are there users with MFA not activated?
2. Are there users with password enabled that didn't change it for a long time?
3. Explain the use case for using the credential report
### Solution
1. Go to the AWS IAM service
2. Under "Access Reports" click on "Credential report"
3. Click on "Download Report" and open it once it's downloaded
4. Answer the questions in this exercise by inspecting the report
The credential report is useful for identifying whether there are any users who need assistance or attention regarding their security, for example a user who didn't change their password for a long time or didn't activate MFA.
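The report can also be generated and inspected from the CLI; a sketch, assuming the standard credential-report column order (the MFA column check below is an assumption about the CSV layout):

```shell
# Request report generation (repeat until the state is COMPLETE)
aws iam generate-credential-report

# Download and decode the report (the content is base64-encoded CSV)
aws iam get-credential-report --query Content --output text | base64 -d > report.csv

# Example check: list users whose MFA is not active (mfa_active is the 8th column)
awk -F, '$8 == "false" {print $1}' report.csv
```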

View File

@@ -0,0 +1,13 @@
## AWS EC2 - EBS Volume Creation
### Requirements
One EC2 instance that you can get rid of :)
### Objectives
1. Create a volume in the same AZ as your instance, with the following properties:
1. gp2 volume type
2. 4 GiB size
2. Once created, attach it to your EC2 instance
3. Remove your EC2 instance. What happened to the EBS volumes attached to the EC2 instance?

View File

@@ -0,0 +1,29 @@
## AWS EC2 - EBS Volume Creation
### Requirements
One EC2 instance that you can get rid of :)
### Objectives
1. Create a volume in the same AZ as your instance, with the following properties:
1. gp2 volume type
2. 4 GiB size
2. Once created, attach it to your EC2 instance
3. Remove your EC2 instance. What happened to the EBS volumes attached to the EC2 instance?
### Solution
1. Go to EC2 service
2. Click on "Volumes" under "Elastic Block Store"
3. Click on "Create Volume"
4. Select the following properties
1. gp2 volume type
2. 4 GiB size
3. The same AZ as your instance
5. Click on "Create volume"
6. Right click on the volume you've just created -> attach volume -> choose your EC2 instance and click on "Attach"
7. Terminate your instance
8. The default EBS volume (created when you launched the instance for the first time) will be deleted (unless you unchecked "Delete on Termination"), but the volume you've created as part of this exercise will remain
Note: don't forget to remove the EBS volume you've created in this exercise
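The volume creation and attachment can also be done from the CLI; a sketch where the AZ, volume ID and instance ID are placeholders:

```shell
# Create a 4 GiB gp2 volume in the same AZ as the instance
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 4 \
    --volume-type gp2

# Attach it to the instance
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf
```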

View File

@@ -0,0 +1,11 @@
## AWS EC2 - IAM Roles
### Requirements
1. Running EC2 instance without any IAM roles (so if you connect to the instance and try to run AWS commands, it fails)
2. IAM role with "IAMReadOnlyAccess" policy
### Objectives
1. Attach a role (and if such role doesn't exist, create it) with "IAMReadOnlyAccess" policy to the EC2 instance
2. Verify you can run AWS commands in the instance

View File

@@ -0,0 +1,21 @@
## AWS EC2 - IAM Roles
### Requirements
1. Running EC2 instance without any IAM roles (so if you connect to the instance and try to run AWS commands, it fails)
2. IAM role with "IAMReadOnlyAccess" policy
### Objectives
1. Attach a role (and if such role doesn't exist, create it) with "IAMReadOnlyAccess" policy to the EC2 instance
2. Verify you can run AWS commands in the instance
### Solution
#### Console
1. Go to EC2 service
2. Click on the instance to which you would like to attach the IAM role
3. Click on "Actions" -> "Security" -> "Modify IAM Role"
4. Choose the IAM role with "IAMReadOnlyAccess" policy and click on "Save"
5. Running AWS commands now in the instance should work fine (e.g. `aws iam list-users`)
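From the CLI, an EC2 role is attached through its instance profile; a sketch where the instance ID and profile name are placeholders:

```shell
# Associate the instance profile wrapping the IAMReadOnlyAccess role
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=IAMReadOnlyRole

# Verify from inside the instance that read-only IAM calls now work
aws iam list-users
```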

View File

@@ -0,0 +1,9 @@
## AWS Containers - Run Tasks
Note: this costs money
### Objectives
Create a task in ECS to launch in Fargate.
The task itself can be a sample app.

View File

@@ -0,0 +1,26 @@
## AWS Containers - Run Tasks
Note: this costs money
### Objectives
Create a task in ECS to launch in Fargate.
The task itself can be a sample app.
### Solution
#### Console
1. Go to Elastic Container Service page
2. Click on "Get Started"
3. Choose "sample-app"
4. Verify it's using Fargate and not ECS (EC2 Instance) and click on "Next"
5. Select "None" in Load balancer type and click on "Next"
6. Insert cluster name (e.g. my_cluster) and click on "Next"
7. Review everything and click on "Create"
8. Wait for everything to complete
1. Go to clusters page and check the status of the task (it will take a couple of seconds/minutes before changing to "Running")
2. Click on the task and you'll see the launch type is Fargate

View File

@@ -0,0 +1,18 @@
## AWS Elastic Beanstalk - Node.js
### Requirements
1. Having a running node.js application on AWS Elastic Beanstalk platform
### Objectives
1. Create an AWS Elastic Beanstalk application with the basic properties
a. No ALB, No Database, Just use the default platform settings
### Out of scope
1. Having ALB attached in place
2. Having custom domain name in place
3. Having automated pipelines in place
4. Having blue-green deployment in place
5. Writing the Node.js application

View File

@@ -0,0 +1,52 @@
## AWS Elastic Beanstalk - Node.js
### Prerequisites
1. Make sure the Node.js application has an _npm start_ command specified in the __package.json__ file, like the following example
```
{
  "name": "application-name",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node app"
  },
  "dependencies": {
    "express": "3.1.0",
    "jade": "*",
    "mysql": "*",
    "async": "*",
    "node-uuid": "*"
  }
}
```
2. Zip the application, and make sure not to zip the parent folder, only its contents, like:
```
\Parent - (exclude the folder itself from the zip)
 - file1 - (include in zip)
 - subfolder1 (include in zip)
   - file2 (include in zip)
   - file3 (include in zip)
```
### Solution
1. Create a "New Environment"
2. Select Environment => _Web Server Environment_
3. Fill the Create a web server environment section
a. Fill the "Application Name"
4. Fill the Environment information section
a. Fill the "Environment Name"
b. Domain - "Leave for autogenerated value"
5. Platform
a. Choose Platform => _node.js_
6. Application Code => upload the Zipped Code from your local computer
7. Create Environment
8. Wait for the environment to come up
9. Check the website
a. Navigate to the _Applications_ tab,
b. select the recently created node.js app
c. click on the URL - highlighted
### Documentation
[Elastic Beanstalk / Node.js getting started](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/nodejs-getstarted.html)

View File

@@ -0,0 +1,10 @@
## AWS EC2 - Elastic IP
### Requirements
* An EC2 instance with public IP (not elastic IP)
### Objectives
1. Write down the public IP of your EC2 instance somewhere and stop & start the instance. Is the public IP address the same? Why?
2. Handle this situation so you have the same public IP even after stopping and starting the instance

View File

@@ -0,0 +1,28 @@
## AWS EC2 - Elastic IP
### Requirements
* An EC2 instance with public IP (not elastic IP)
### Objectives
1. Write down the public IP of your EC2 instance somewhere and stop & start the instance. Is the public IP address the same? Why?
2. Handle this situation so you have the same public IP even after stopping and starting the instance
### Solution
1. Go to EC2 service -> Instances
1. Write down current public IP address
2. Click on "Instance state" -> Stop instance -> Stop
3. Click on "Instance state" -> Start Instance
4. No, the public IP address has changed
2. Let's use an Elastic IP address
1. In EC2 service, under "Network & Security" click on "Elastic IP"
2. Click on the "Allocate elastic IP address" button
3. Make sure you select "Amazon's pool of IPv4 addresses" and click on "Allocate"
4. Click on "Actions" and then "Associate Elastic IP address"
1. Select "instance", choose your instance and provide its private IP address
2. Click on "Associate"
5. Now, if we go back to the instance page, we can see it is using the Elastic IP address as its public IP
Note: to remove it, use "disassociate" option and don't forget to also release it so you won't be billed.
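The Elastic IP flow can also be sketched with the CLI — all IDs below are placeholders:

```shell
# Allocate an Elastic IP from Amazon's IPv4 pool
aws ec2 allocate-address --domain vpc

# Associate it with the instance
aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0

# Cleanup: disassociate and release the address so you won't be billed
aws ec2 disassociate-address --association-id eipassoc-0123456789abcdef0
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0
```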

View File

@@ -0,0 +1,10 @@
## AWS EC2 - Elastic Network Interfaces
### Requirements
* An EC2 instance with network interface
### Objectives
A. Create a network interface and attach it to the EC2 instance that already has one network interface
B. Explain why anyone would use two network interfaces

View File

@@ -0,0 +1,25 @@
## AWS EC2 - Elastic Network Interfaces
### Requirements
* An EC2 instance with network interface
### Objectives
A. Create a network interface and attach it to the EC2 instance that already has one network interface
B. Explain why anyone would use two network interfaces
### Solution
A.
1. Go to EC2 service
2. Click on "Network Interfaces" under "Network & Security"
3. Click on "Create network interface"
4. Provide a description
5. Choose a subnet (one that is in the same AZ as the instance)
6. Optionally attach a security group and click on "Create network interface"
7. Click on "Actions" -> "Attach" and choose the instance to attach it to
8. If you go now to "Instances" page you'll see your instance has two network interfaces
B.
1. You can move the second network interface between instances, which allows you to implement a failover mechanism between them.

View File

@@ -0,0 +1,7 @@
## AWS ElastiCache
### Objectives
1. Create ElastiCache Redis
* Instance type should be "cache.t2.micro"
* Replicas should be 0

View File

@@ -0,0 +1,20 @@
## AWS ElastiCache
### Objectives
1. Create ElastiCache Redis
* Instance type should be "cache.t2.micro"
* Replicas should be 0
### Solution
#### Console
1. Go to ElastiCache service
2. Click on "Get Started Now"
3. Choose "Redis"
4. Insert a name and description
5. Choose "cache.t2.micro" as the node type
6. Set number of replicas to 0
7. Create new subnet group
8. Click on "Create"

View File

@@ -0,0 +1,14 @@
## AWS Route 53 - Health Checks
### Requirements
3 web instances in different AZs.
### Objectives
1. For each instance create a health check with the following properties:
1. Name it after the AZ where the instance resides
2. Failure threshold should be 5
2. Edit the security group of one of your instances and remove HTTP rules.
1. Did it change the status of the health check?

View File

@@ -0,0 +1,33 @@
## AWS Route 53 - Health Checks
### Requirements
3 web instances in different AZs.
### Objectives
1. For each instance create a health check with the following properties:
1. Name it after the AZ where the instance resides
2. Failure threshold should be 5
2. Edit the security group of one of your instances and remove HTTP rules.
1. Did it change the status of the health check?
### Solution
#### Console
1. Go to Route 53
2. Click on "Health Checks" in the left-side menu
3. Click on "Create health check"
4. Insert the name of the AZ where the instance resides (e.g. us-east-2a)
5. What to monitor: endpoint
6. Insert the IP address of the instance
7. Insert the endpoint /health if your web instance supports that endpoint
8. In advanced configuration, set Failure threshold to 5
9. Click on "next" and then on "Create health check"
10. Repeat steps 1-9 for the other two instances you have
1. Go to security group of one of your instances
2. Click on "Actions" -> Edit inbound rules -> Delete HTTP based rules
3. Go back to health checks page and after a couple of seconds you should see that the status becomes "unhealthy"

View File

@@ -0,0 +1,3 @@
## Hello Function
Create a basic AWS Lambda function that, when given a name, returns "Hello <NAME>"

View File

@@ -0,0 +1,49 @@
## Hello Function - Solution
### Exercise
Create a basic AWS Lambda function that, when given a name, returns "Hello <NAME>"
### Solution
#### Define a function
1. Go to Lambda console panel and click on `Create function`
1. Give the function a name like `BasicFunction`
2. Select `Python3` runtime
3. Now to handle function's permissions, we can attach IAM role to our function either by setting a role or creating a new role. I selected "Create a new role from AWS policy templates"
4. In "Policy Templates" select "Simple Microservice Permissions"
2. Next, you should see a text editor where you will insert code similar to the following
#### Function's code
```
def lambda_handler(event, context):
    firstName = event['name']
    return 'Hello ' + firstName
```
3. Click on "Create Function"
#### Define a test
1. Now let's test the function. Click on "Test".
2. Select "Create new test event"
3. Set the "Event name" to whatever you'd like. For example "TestEvent"
4. Provide a test payload, for example
```
{
  "name": "Spyro"
}
```
```
5. Click on "Create"
#### Test the function
1. Choose the test event you've created (`TestEvent`)
2. Click on the `Test` button
3. You should see something similar to `Execution result: succeeded`
4. If you go to AWS CloudWatch, you should see a related log stream
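You can also invoke the function from the CLI; a sketch assuming the function was named `BasicFunction` as above (the `--cli-binary-format` flag applies to AWS CLI v2):

```shell
# Invoke the function with a test payload and write the response to out.json
aws lambda invoke \
    --function-name BasicFunction \
    --payload '{"name": "Spyro"}' \
    --cli-binary-format raw-in-base64-out \
    out.json

# The response body should contain "Hello Spyro"
cat out.json
```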

View File

@@ -0,0 +1,8 @@
## AWS EC2 - Hibernate an Instance
### Objectives
1. Create an instance that supports hibernation
2. Hibernate the instance
3. Start the instance
4. How can you prove, from the OS perspective, that the instance was hibernated?

View File

@@ -0,0 +1,25 @@
## AWS EC2 - Hibernate an Instance
### Objectives
1. Create an instance that supports hibernation
2. Hibernate the instance
3. Start the instance
4. How can you prove, from the OS perspective, that the instance was hibernated?
### Solution
1. Create an instance that supports hibernation
1. Go to EC2 service
2. Go to instances and create an instance
3. In "Configure instance" make sure to check "Enable hibernation as an additional stop behavior"
4. In "Add storage", make sure to encrypt EBS and make sure the size > instance RAM size (because hibernation saves the RAM state)
5. Review and Launch
2. Hibernate the instance
1. Go to the instance page
2. Click on "Instance state" -> "Hibernate instance" -> Hibernate
3. Instance state -> Start
4. Run the `uptime` command; since hibernation preserves the in-memory state, the uptime is not reset (as it would be after a regular stop and start), which proves the instance was hibernated

View File

@@ -0,0 +1,15 @@
## AWS - Launch EC2 Web Instance
### Objectives
Launch one EC2 instance with the following requirements:
1. Amazon Linux 2 image
2. Instance type: pick one that has 1 vCPU and 1 GiB memory
3. Instance storage should be deleted upon the termination of the instance
4. When the instance starts, it should:
1. Install the httpd package
2. Start the httpd service
3. Make sure the content of /var/www/html/index.html is `I made it! This is awesome!`
5. It should have the tag: "Type: web" and the name of the instance should be "web-1"
6. HTTP traffic (port 80) should be accepted from anywhere

View File

@@ -0,0 +1,39 @@
## AWS - Launch EC2 Web Instance
### Objectives
Launch one EC2 instance with the following requirements:
1. Amazon Linux 2 image
2. Instance type: pick one that has 1 vCPU and 1 GiB memory
3. Instance storage should be deleted upon the termination of the instance
4. When the instance starts, it should:
1. Install the httpd package
2. Start the httpd service
3. Make sure the content of /var/www/html/index.html is `I made it! This is awesome!`
5. It should have the tag: "Type: web" and the name of the instance should be "web-1"
6. HTTP traffic (port 80) should be accepted from anywhere
### Solution
1. Choose a region close to you
2. Go to EC2 service
3. Click on "Instances" in the menu and click on "Launch instances"
4. Choose image: Amazon Linux 2
5. Choose instance type: t2.micro
6. Make sure "Delete on Termination" is checked in the storage section
7. Under the "User data" field, insert the following:
```
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>I made it! This is awesome!</h1>" > /var/www/html/index.html
```
8. Add tags with the following keys and values:
* key "Type" and the value "web"
* key "Name" and the value "web-1"
9. In the security group section, add a rule to accept HTTP traffic (TCP) on port 80 from anywhere
10. Click on "Review" and then click on "Launch" after reviewing.
11. If you don't have a key pair, create one and download it.
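The launch can also be sketched with the CLI — the AMI ID, key name and security group ID are placeholders, and `userdata.sh` is assumed to contain the user data script from step 7:

```shell
# Launch the web instance with user data and the required tags
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-key \
    --security-group-ids sg-0123456789abcdef0 \
    --user-data file://userdata.sh \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Type,Value=web},{Key=Name,Value=web-1}]'
```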

View File

@@ -0,0 +1,14 @@
## AWS Databases - MySQL DB
### Objectives
1. Create a MySQL database with the following properties
* Instance type: db.t2.micro
* gp2 storage
* Storage Auto scaling should be enabled and threshold should be set to 500 GiB
* Public access should be enabled
* Port should be set to 3306
* DB name: 'db'
* Backup retention: 10 days
2. Create a read replica for the database you've created

View File

@@ -0,0 +1,42 @@
## AWS Databases - MySQL DB
### Objectives
1. Create a MySQL database with the following properties
* Instance type: db.t2.micro
* gp2 storage
* Storage Auto scaling should be enabled and threshold should be set to 500 GiB
* Public access should be enabled
* Port should be set to 3306
* DB name: 'db'
* Backup retention: 10 days
2. Create a read replica for the database you've created
### Solution
#### Console
1. Go to RDS service
2. Click on "Databases" in the left side menu and click on the "Create database" button
3. Choose "standard create"
4. Choose "MySQL" and the recommended version
5. Choose "Production" template
6. Specify DB instance identifier
7. Specify Credentials (master username and password)
8. Choose DB instance type: Burstable classes, db.t2.micro
9. Choose "gp2" as storage
10. Enable storage autoscaling: maximum storage threshold of 500 GiB
11. Choose "Do not create a standby instance"
12. Choose a default VPC and subnet
13. Check "Yes" for public access
14. Choose "No preference" for AZ
15. Database port should be 3306
16. For authentication, choose "Password and IAM database authentication"
17. Set initial database name as "db"
18. Increase backup retention period to 10 days
19. Click on "Create database" button
1. Go to the database under "Databases" in the left side menu
2. Click on "Actions" -> Create read replica
3. Click on "Create read replica"
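The same database can be sketched from the CLI; the identifier and credentials below are placeholders:

```shell
# Create the MySQL instance with the required properties
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.t2.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password 'S0me-Strong-Passw0rd!' \
    --allocated-storage 20 \
    --max-allocated-storage 500 \
    --storage-type gp2 \
    --publicly-accessible \
    --port 3306 \
    --db-name db \
    --backup-retention-period 10

# Once the instance is available, create a read replica
aws rds create-db-instance-read-replica \
    --db-instance-identifier mydb-replica \
    --source-db-instance-identifier mydb
```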

View File

@@ -0,0 +1,13 @@
## AWS ELB - Network Load Balancer
### Requirements
Two running EC2 instances
### Objectives
1. Create a network load balancer
1. healthy threshold: 3
2. unhealthy threshold: 3
3. interval: 10 seconds
4. Listener should be using TCP protocol on port 80

View File

@@ -0,0 +1,35 @@
## AWS ELB - Network Load Balancer
### Requirements
Two running EC2 instances
### Objectives
1. Create a network load balancer
1. healthy threshold: 3
2. unhealthy threshold: 3
3. interval: 10 seconds
4. Listener should be using TCP protocol on port 80
### Solution
#### Console
1. Go to EC2 service
2. Click in the left side menu on "Load balancers" under "Load balancing"
3. Click on "Create load balancer"
4. Choose "Network Load Balancer"
5. Insert a name for the LB
6. Choose AZs where you want the LB to operate
7. Choose a security group
8. Under "Listeners and routing" click on "Create target group" and choose "Instances"
1. Provide a name for the target group
2. Set healthy threshold to 3
3. Set unhealthy threshold to 3
4. Set interval to 10 seconds
5. Set protocol to TCP and port to 80
6. Click on "Next" and choose two instances you have
7. Click on "Create target group"
9. Refresh target groups and choose the one you've just created
10. Click on "Create load balancer" and wait for it to be provisioned

View File

@@ -0,0 +1,6 @@
## AWS VPC - My First VPC
### Objectives
1. Create a new VPC
1. It should have a CIDR that supports using at least 60,000 hosts

View File

@@ -0,0 +1,17 @@
## AWS VPC - My First VPC
### Objectives
1. Create a new VPC
1. It should have a CIDR that supports using at least 60,000 hosts
### Solution
#### Console
1. Under "Virtual Private Cloud" click on "Your VPCs"
2. Click on "Create VPC"
3. Insert a name (e.g. someVPC)
4. Insert IPv4 CIDR block: 10.0.0.0/16
5. Keep "Tenancy" at Default
6. Click on "Create VPC"
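The equivalent CLI call; a /16 block provides 65,536 addresses, which covers the 60,000-host requirement:

```shell
# Create the VPC with a /16 CIDR and a Name tag
aws ec2 create-vpc \
    --cidr-block 10.0.0.0/16 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=someVPC}]'
```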

View File

@@ -0,0 +1,8 @@
## No Application :'(
### Objectives
Explain what might be possible reasons for the following issues:
1. Getting "time out" when trying to reach an application running on EC2 instance
2. Getting "connection refused" error

View File

@@ -0,0 +1,21 @@
## No Application :'(
### Objectives
Explain what might be possible reasons for the following issues:
1. Getting "time out" when trying to reach an application running on EC2 instance
2. Getting "connection refused" error
### Solution
1. 'Time out' can be due to one of the following:
* Security group doesn't allow access
* No host (yes, I know. Not the first thing to check and yet...)
* Operating system firewall blocking traffic
2. 'Connection refused' can happen due to one of the following:
* Application didn't launch properly or has some issue (it doesn't listen on the designated port)
* Firewall replied with a reject instead of dropping the packets

View File

@@ -0,0 +1,12 @@
## AWS IAM - Password Policy & MFA
Note: DON'T perform this exercise unless you understand what you are doing and what is the outcome of applying these changes to your account
### Objectives
1. Create password policy with the following settings:
1. Minimum of 8 characters
2. At least one number
3. Prevent password reuse
2. Then enable MFA for the account.

View File

@@ -0,0 +1,32 @@
## AWS IAM - Password Policy & MFA
Note: DON'T perform this exercise unless you understand what you are doing and what is the outcome of applying these changes to your account
### Objectives
1. Create password policy with the following settings:
1. Minimum of 8 characters
2. At least one number
3. Prevent password reuse
2. Then enable MFA for the account.
### Solution
Password Policy:
1. Go to IAM service in AWS
2. Click on "Account settings" under "Access management"
3. Click on "Change password policy"
1. Check "Enforce minimum password length" and set it to 8 characters
2. Check "Require at least one number"
3. Check "Prevent password reuse"
4. Click on "Save changes"
MFA:
1. Click on the account name
2. Click on "My Security Credentials"
3. Expand "Multi-factor authentication (MFA)" and click on "Activate MFA"
4. Choose one of the devices
5. Follow the instructions to set it up and click on "Assign MFA"
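The password policy part (not the MFA part) can also be applied from the CLI:

```shell
# Apply the account password policy from the exercise
aws iam update-account-password-policy \
    --minimum-password-length 8 \
    --require-numbers \
    --password-reuse-prevention 1

# Inspect the result
aws iam get-account-password-policy
```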

View File

@@ -0,0 +1,6 @@
## AWS EC2 - Placement Groups
### Objectives
A. Create a placement group. It should be one with a low latency network. Make sure to launch an instance as part of this placement group.
B. Create another placement group. This time high availability is a priority

View File

@@ -0,0 +1,23 @@
## AWS EC2 - Placement Groups
### Objectives
A. Create a placement group. It should be one with a low latency network. Make sure to launch an instance as part of this placement group.
B. Create another placement group. This time high availability is a priority
### Solution
A.
1. Go to EC2 service
2. Click on "Placement Groups" under "Network & Security"
3. Click on "Create placement group"
4. Give it a name and choose the "Cluster" placement strategy because the requirement is low latency network
5. Click on "Create group"
6. Go to "Instances" and click on "Launch an instance". Choose any properties you would like, just make sure to check "Add instance to placement group" and choose the placement group you've created
B.
1. Go to EC2 service
2. Click on "Placement Groups" under "Network & Security"
3. Click on "Create placement group"
4. Give it a name and choose the "Spread" placement strategy because the requirement is high availability as top priority
5. Click on "Create group"

View File

@@ -0,0 +1,9 @@
## AWS Route 53 - Register Domain
### Objectives
Note: registering a domain costs money. Don't do this exercise unless you understand that you are going to register a domain and it's going to cost you money.
1. Register your own custom domain using AWS Route 53
2. What is the type of your domain?
3. How many records does your domain have?

View File

@@ -0,0 +1,27 @@
## AWS Route 53 - Register Domain
### Objectives
Note: registering a domain costs money. Don't do this exercise unless you understand that you are going to register a domain and it's going to cost you money.
1. Register your own custom domain using AWS Route 53
2. What is the type of your domain?
3. How many records does your domain have?
### Solution
1. Go to Route 53 service page
2. Click in the menu on "Registered Domains" under "Domains"
3. Click on "Register Domain"
4. Insert your domain
5. Check if it's available. If it is, add it to the cart
Note: registering a domain costs money. Don't click on "Continue" unless you understand that you are going to register a domain and it's going to cost you money.
6. Click on "Continue" and fill in your contact information
7. Choose if you want to renew it in the future automatically. Accept the terms and click on "Complete Order"
8. Go to hosted zones and you should see there your newly registered domain
1. The domain type is "Public"
2. The domain has 2 DNS records: NS and SOA

View File

@@ -0,0 +1,11 @@
## AWS Route 53 - Failover
### Requirements
A running EC2 web instance with a health check defined for it in Route 53
### Objectives
1. Create a failover record that will fail over to another record if a health check isn't passing
1. Make sure TTL is 30
2. Associate the failover record with the health check you have

View File

@@ -0,0 +1,29 @@
## AWS Route 53 - Failover
### Requirements
A running EC2 web instance with a health check defined for it in Route 53
### Objectives
1. Create a failover record that will fail over to another record if a health check isn't passing
1. Make sure TTL is 30
2. Associate the failover record with the health check you have
### Solution
#### Console
1. Go to Route 53 service
2. Click on "Hosted Zones" in the left-side menu
3. Click on your hosted zone
4. Click on "Created record"
5. Insert "failover" in record name and set record type to A
6. Insert the IP of your instance
7. Set the routing policy to failover
8. Set TTL to 30
9. Associate the record with a health check
10. Add another record with the same properties as the previous one
11. Click on "Create records"
12. Go to your EC2 instance and edit its security group to remove the HTTP rules
13. Use your web app; if you print the hostname of your instance, you will notice that a failover was performed and a different EC2 instance is now used
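For reference, a similar primary failover record can be created from the CLI. This is only a sketch; the hosted zone ID, health check ID, domain name and IP below are all placeholders:

```shell
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "failover.example.com",
      "Type": "A",
      "SetIdentifier": "primary",
      "Failover": "PRIMARY",
      "TTL": 30,
      "HealthCheckId": "11111111-2222-3333-4444-555555555555",
      "ResourceRecords": [{"Value": "203.0.113.10"}]
    }
  }]
}'
```

A second record with `"Failover": "SECONDARY"` (and a different `SetIdentifier`) completes the pair.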


@@ -0,0 +1,20 @@
## AWS EC2 - Security Groups
### Requirements
For this exercise you'll need:
1. EC2 instance with web application
2. Security group inbound rules that allow HTTP traffic
### Objectives
1. List the security groups you have in your account, in the region you are using
2. Remove the HTTP inbound traffic rule
3. Can you still access the application? What do you see/get?
4. Add back the rule
5. Can you access the application now?
## Solution
Click [here to view the solution](solution.md)


@@ -0,0 +1,55 @@
## AWS EC2 - Security Groups
### Requirements
For this exercise you'll need:
1. EC2 instance with web application
2. Security group inbound rules that allow HTTP traffic
### Objectives
1. List the security groups you have in your account, in the region you are using
2. Remove the HTTP inbound traffic rule
3. Can you still access the application? What do you see/get?
4. Add back the rule
5. Can you access the application now?
### Solution
#### Console
1. Go to EC2 service -> Click on "Security Groups" under "Network & Security"
You should see at least one security group. One of them is called "default"
2. Click on the security group with HTTP rules and click on "Edit inbound rules".
Remove the HTTP related rules and click on "Save rules"
3. No. There is a timeout because we removed the rule allowing HTTP traffic.
4. Click on the security group -> edit inbound rules and add the following rule:
* Type: HTTP
* Port range: 80
* Source: Anywhere -> 0.0.0.0/0
5. Yes
#### CLI
1. `aws ec2 describe-security-groups` -> by default, there is one security group called "default", in a new account
2. Remove the rule:
```
aws ec2 revoke-security-group-ingress \
--group-name someHTTPSecurityGroup \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0
```
3. No. There is a timeout because we removed the rule allowing HTTP traffic.
4. Add the rule we removed:
```
aws ec2 authorize-security-group-ingress \
--group-name someHTTPSecurityGroup \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0
```
5. Yes


@@ -0,0 +1,16 @@
## AWS EC2 - EBS Snapshots
### Requirements
EBS Volume
### Objectives
A. Create a snapshot of an EBS volume
B. Verify the snapshot was created
C. Move the data to another region
D. Create a volume out of it in a different AZ
## Solution
Click [here to view the solution](solution.md)


@@ -0,0 +1,33 @@
## AWS EC2 - EBS Snapshots
### Requirements
EBS Volume
### Objectives
A. Create a snapshot of an EBS volume
B. Verify the snapshot was created
C. Move the data to another region
D. Create a volume out of it in a different AZ
### Solution
A.
1. Go to EC2 service
2. Click on "Volumes" under "Elastic Block Store"
3. Right click on the chosen volume -> Create snapshot
4. Insert a description and click on "Create Snapshot"
B.
1. Click on "Snapshots" under "Elastic Block Store"
2. You should see the snapshot you've created
C.
1. Select the snapshot and click on Actions -> Copy
2. Select a region to where the snapshot will be copied
D.
1. Select the snapshot and click on Actions -> Create volume
2. Choose a different AZ
3. Click on "Create Volume"
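The same flow can be done with the AWS CLI. This is only a sketch; every ID and region below is a placeholder:

```shell
# A. Create a snapshot of the volume
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "EBS exercise snapshot"

# B. Verify the snapshot was created
aws ec2 describe-snapshots \
    --filters Name=volume-id,Values=vol-0123456789abcdef0

# C. Copy the snapshot to another region
aws ec2 copy-snapshot \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --region eu-west-1

# D. Create a volume from the copied snapshot (placeholder ID) in a chosen AZ
aws ec2 create-volume \
    --region eu-west-1 \
    --snapshot-id snap-0abcdef01234567890 \
    --availability-zone eu-west-1b
```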


@@ -0,0 +1,23 @@
## AWS VPC - Subnets
### Requirements
Single newly created VPC
### Objectives
1. Create a subnet in your newly created VPC
1. CIDR: 10.0.0.0/24
2. Name: NewSubnet1
2. Create additional subnet
1. CIDR: 10.0.1.0/24
2. Name: NewSubnet2
3. Different AZ compared to previous subnet
3. Create additional subnet
1. CIDR: 10.0.2.0/24
2. Name: NewSubnet3
3. Different AZ compared to previous subnets
## Solution
Click [here to view the solution](solution.md)


@@ -0,0 +1,39 @@
## AWS VPC - Subnets
### Requirements
Single newly created VPC
### Objectives
1. Create a subnet in your newly created VPC
1. CIDR: 10.0.0.0/24
2. Name: NewSubnet1
2. Create additional subnet
1. CIDR: 10.0.1.0/24
2. Name: NewSubnet2
3. Different AZ compared to previous subnet
3. Create additional subnet
1. CIDR: 10.0.2.0/24
2. Name: NewSubnet3
3. Different AZ compared to previous subnets
### Solution
#### Console
1. Click on "Subnets" under "Virtual Private Cloud"
2. Make sure you filter by your newly created VPC (to not see the subnets in all other VPCs). You can do this in the left side menu
3. Click on "Create subnet"
4. Choose your newly created VPC
5. Set the subnet name to "NewSubnet1"
6. Choose AZ
7. Set CIDR to 10.0.0.0/24
8. Click on "Add new subnet"
9. Set the subnet name to "NewSubnet2"
10. Choose a different AZ
11. Set CIDR to 10.0.1.0/24
12. Click on "Add new subnet"
13. Set the subnet name to "NewSubnet3"
14. Choose a different AZ
15. Set CIDR to 10.0.2.0/24
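The same subnets could be created with the AWS CLI, roughly as follows (the VPC ID and AZ names are placeholders):

```shell
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.0.0.0/24 --availability-zone us-east-1a \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=NewSubnet1}]'

aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.0.1.0/24 --availability-zone us-east-1b \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=NewSubnet2}]'

aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.0.2.0/24 --availability-zone us-east-1c \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=NewSubnet3}]'
```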


@@ -0,0 +1,7 @@
## URL Function
Create a basic AWS Lambda function that will be triggered when you enter a URL in the browser
## Solution
Click [here to view the solution](solution.md)


@@ -0,0 +1,71 @@
## URL Function
Create a basic AWS Lambda function that will be triggered when you enter a URL in the browser
### Solution
#### Define a function
1. Go to Lambda console panel and click on `Create function`
1. Give the function a name like `urlFunction`
2. Select `Python3` runtime
3. Now, to handle the function's permissions, we can attach an IAM role to our function, either by selecting an existing role or by creating a new one. I selected "Create a new role from AWS policy templates"
4. In "Policy Templates" select "Simple Microservice Permissions"
1. Next, you should see a text editor where you will insert code similar to the following
#### Function's code
```
import json
def lambda_handler(event, context):
    firstName = event['name']
    return 'Hello ' + firstName
```
2. Click on "Create Function"
#### Define a test
1. Now let's test the function. Click on "Test".
2. Select "Create new test event"
3. Set the "Event name" to whatever you'd like. For example "TestEvent"
4. Provide keys to test
```
{
  "name": "Spyro"
}
```
5. Click on "Create"
#### Test the function
1. Choose the test event you've created (`TestEvent`)
2. Click on the `Test` button
3. You should see something similar to `Execution result: succeeded`
4. If you go to AWS CloudWatch, you should see a related log stream
#### Define a trigger
We'll define a trigger in order to trigger the function when inserting the URL in the browser
1. Go to "API Gateway console" and click on "New API Option"
2. Insert the API name, description and click on "Create"
3. Click on Action -> Create Resource
4. Insert resource name and path (e.g. the path can be /hello) and click on "Create Resource"
5. Select the resource we've created and click on "Create Method"
6. For "integration type" choose "Lambda Function" and insert the name of the Lambda function we previously created. Make sure to also use the same region
7. Confirm settings and any required permissions
8. Now click again on the resource and modify "Body Mapping Templates" so the template includes this:
```
{ "name": "$input.params('name')" }
```
9. Finally save and click on Actions -> Deploy API
#### Running the function
1. In the API Gateway console, in the "Stages" menu, select the API we've created and click on the GET option
2. You'll see an invoke URL you can click on. You might have to modify it to include the input so it looks similar to this: `.../hello?name=mario`
3. You should see in your browser `Hello Mario`
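You can also test it from the command line with curl. The invoke URL below is a made-up example; use the one API Gateway shows you:

```shell
# Invoke URL is a placeholder; copy the actual one from the API Gateway console
curl "https://abc123.execute-api.us-east-1.amazonaws.com/prod/hello?name=mario"
# Should return a greeting such as "Hello mario"
```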

topics/azure/README.md Normal file

@@ -0,0 +1,199 @@
# Azure
- [Azure](#azure)
- [Questions](#questions)
- [Azure 101](#azure-101)
- [Azure Resource Manager](#azure-resource-manager)
- [Compute](#compute)
- [Network](#network)
- [Storage](#storage)
- [Security](#security)
## Questions
### Azure 101
<details>
<summary>What is Azure Portal?</summary><br><b>
[Microsoft Docs](https://docs.microsoft.com/en-us/learn/modules/intro-to-azure-fundamentals/what-is-microsoft-azure): "The Azure portal is a web-based, unified console that provides an alternative to command-line tools. With the Azure portal, you can manage your Azure subscription by using a graphical user interface."
</b></details>
<details>
<summary>What is Azure Marketplace?</summary><br><b>
[Microsoft Docs](https://docs.microsoft.com/en-us/learn/modules/intro-to-azure-fundamentals/what-is-microsoft-azure): "Azure marketplace helps connect users with Microsoft partners, independent software vendors, and startups that are offering their solutions and services, which are optimized to run on Azure."
</b></details>
<details>
<summary>Explain availability sets and availability zones</summary><br><b>
An availability set is a logical grouping of VMs that allows Azure to understand how your application is built to provide redundancy and availability. It is recommended that two or more VMs are created within an availability set to provide for a highly available application and to meet the 99.95% Azure SLA.
</b></details>
<details>
<summary>What is Azure Policy?</summary><br><b>
</b></details>
<details>
<summary>What is the Azure Resource Manager? Can you describe the format for ARM templates?</summary><br><b>
</b></details>
<details>
<summary>Explain Azure managed disks</summary><br><b>
</b></details>
### Azure Resource Manager
<details>
<summary>Explain what's Azure Resource Manager</summary><br><b>
From [Azure docs](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview): "Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. You use management features, like access control, locks, and tags, to secure and organize your resources after deployment."
</b></details>
<details>
<summary>What's an Azure Resource Group?</summary><br><b>
From [Azure docs](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal): "A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group."
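As an illustration, creating and listing resource groups with the Azure CLI might look like this (the group name and location are arbitrary examples):

```shell
# Create a resource group (name and location are arbitrary examples)
az group create --name my-resource-group --location eastus

# Verify it exists
az group list --output table
```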
</b></details>
### Compute
<details>
<summary>What Azure compute services are you familiar with?</summary><br><b>
* Azure Virtual Machines
* Azure Batch
* Azure Service Fabric
* Azure Container Instances
* Azure Virtual Machine Scale Sets
</b></details>
<details>
<summary>What is the "Azure Virtual Machines" service used for?</summary><br><b>
Windows or Linux virtual machines
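For example, creating a Linux VM with the Azure CLI could look roughly like this (all names are placeholders):

```shell
az vm create \
    --resource-group my-resource-group \
    --name my-vm \
    --image UbuntuLTS \
    --admin-username azureuser \
    --generate-ssh-keys
```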
</b></details>
<details>
<summary>What is the "Azure Virtual Machine Scale Sets" service used for?</summary><br><b>
Scaling Linux or Windows virtual machines used in Azure
</b></details>
<details>
<summary>What is the "Azure Functions" service used for?</summary><br><b>
Azure Functions is the serverless compute service of Azure.
</b></details>
<details>
<summary>What are "Durable Azure Functions"?</summary>
<br>
[Microsoft Learn](https://docs.microsoft.com/en-us/learn/modules/intro-to-azure-fundamentals/what-is-microsoft-azure): Durable Functions is an extension of Azure Functions that lets you write stateful functions in a serverless compute environment.
</details>
<details>
<summary>What is the "Azure Container Instances" service used for?</summary><br><b>
Running containerized applications (without the need to provision virtual machines).
</b></details>
<details>
<summary>What is the "Azure Batch" service used for?</summary><br><b>
Running parallel and high-performance computing applications
</b></details>
<details>
<summary>What is the "Azure Service Fabric" service used for?</summary><br><b>
</b></details>
<details>
<summary>What is the "Azure Kubernetes" service used for?</summary><br><b>
</b></details>
### Network
<details>
<summary>What Azure network services are you familiar with?</summary><br><b>
</b></details>
<details>
<summary>What's an Azure region?</summary><br><b>
</b></details>
<details>
<summary>What is the N-tier architecture?</summary><br><b>
</b></details>
### Storage
<details>
<summary>What Azure storage services are you familiar with?</summary><br><b>
</b></details>
<details>
<summary>What storage options does Azure support?</summary><br><b>
</b></details>
### Security
<details>
<summary>What is the Azure Security Center? What are some of its features?</summary><br><b>
It's a monitoring service that provides threat protection across all of the services in Azure.
More specifically, it:
* Provides security recommendations based on your usage
* Continuously monitors security settings across all the services
* Analyzes and identifies potential inbound attacks
* Detects and blocks malware using machine learning
</b></details>
<details>
<summary>What is Azure Active Directory?</summary><br><b>
Azure AD is a cloud-based identity service. You can use it as a standalone service or integrate it with an existing Active Directory service you are already running.
</b></details>
<details>
<summary>What is Azure Advanced Threat Protection?</summary><br><b>
</b></details>
<details>
<summary>What components are part of Azure ATP?</summary><br><b>
</b></details>
<details>
<summary>Where are logs stored in Azure Monitor?</summary><br><b>
</b></details>
<details>
<summary>Explain Azure Site Recovery</summary><br><b>
</b></details>
<details>
<summary>Explain what Azure Advisor does</summary><br><b>
</b></details>
<details>
<summary>Explain VNet peering</summary><br><b>
</b></details>
<details>
<summary>Which protocols are available for configuring a health probe?</summary><br><b>
</b></details>
<details>
<summary>Explain Azure Active</summary><br><b>
</b></details>
<details>
<summary>What is a subscription? What types of subscriptions are there?</summary><br><b>
</b></details>
<details>
<summary>Explain what is a blob storage service</summary><br><b>
</b></details>

topics/cicd/README.md Normal file

@@ -0,0 +1,325 @@
## CI/CD
### CI/CD Exercises
|Name|Topic|Objective & Instructions|Solution|Comments|
|--------|--------|------|----|----|
| Set up a CI pipeline | CI | [Exercise](ci_for_open_source_project.md) | | |
| Deploy to Kubernetes | Deployment | [Exercise](deploy_to_kubernetes.md) | [Solution](solutions/deploy_to_kubernetes/README.md) | |
| Jenkins - Remove Jobs | Jenkins Scripts | [Exercise](remove_jobs.md) | [Solution](solutions/remove_jobs_solution.groovy) | |
| Jenkins - Remove Builds | Jenkins Scripts | [Exercise](remove_builds.md) | [Solution](solutions/remove_builds_solution.groovy) | |
### CI/CD Self Assessment
<details>
<summary>What is Continuous Integration?</summary><br><b>
A development practice where developers integrate code into a shared repository frequently. It can range from a couple of changes every day or week to several changes per hour at larger scales.
Each piece of code (change/patch) is verified, to make sure the change is safe to merge. Today, it's a common practice to test the change using an automated build that makes sure the code can be integrated. It can be one build which runs several tests at different levels (unit, functional, etc.) or several separate builds, all or some of which have to pass in order for the change to be merged into the repository.
</b></details>
<details>
<summary>What is Continuous Deployment?</summary><br><b>
A development strategy used by developers to release software automatically into production where any code commit must pass through an automated testing phase. Only when this is successful is the release considered production worthy. This eliminates any human interaction and should be implemented only after production-ready pipelines have been set with real-time monitoring and reporting of deployed assets. If any issues are detected in production it should be easy to roll back to a previous working state.
For more info please read [here](https://www.atlassian.com/continuous-delivery/continuous-deployment)
</b></details>
<details>
<summary>Can you describe an example of a CI (and/or CD) process starting the moment a developer submitted a change/PR to a repository?</summary><br><b>
There are many answers to such a question, as CI processes vary depending on the technologies used and the type of project to which the change was submitted.
Such processes can include one or more of the following stages:
* Compile
* Build
* Install
* Configure
* Update
* Test
An example of one possible answer:
A developer submitted a pull request to a project. The PR (pull request) triggered two jobs (or one combined job): one job for running a lint test on the change and a second job for building a package which includes the submitted change and running multiple API/scenario tests using that package. Once all tests passed and the change was approved by a maintainer/core, it's merged/pushed to the repository. If some of the tests fail, the change will not be merged/pushed to the repository.
A completely different answer (or CI process) can describe how a developer pushes code to a repository, which triggers a workflow to build a container image and push it to the registry. Once the image is in the registry, the new changes are applied to the k8s cluster.
</b></details>
<details>
<summary>What is Continuous Delivery?</summary><br><b>
A development strategy used to frequently deliver code to QA and Ops for testing. This entails having a staging area that has production like features where changes can only be accepted for production after a manual review. Because of this human entanglement there is usually a time lag between release and review making it slower and error prone as compared to continuous deployment.
For more info please read [here](https://www.atlassian.com/continuous-delivery/continuous-deployment)
</b></details>
<details>
<summary>What is difference between Continuous Delivery and Continuous Deployment?</summary><br><b>
Both encapsulate the same process of deploying the changes which were compiled and/or tested in the CI pipelines.<br>
The difference between the two is that Continuous Delivery isn't a fully automated process, as opposed to Continuous Deployment where every change that passes the process is eventually deployed to production. In Continuous Delivery, someone either approves the deployment or the deployment is based on constraints and conditions (like a time constraint of deploying every week/month/...).
</b></details>
<details>
<summary>What CI/CD best practices are you familiar with? Or what do you consider as CI/CD best practice?</summary><br><b>
* Commit and test often.
* Testing/Staging environment should be a clone of production environment.
* Clean up your environments (e.g. your CI/CD pipelines may create a lot of resources. They should also take care of cleaning up everything they create)
* The CI/CD pipelines should provide the same results when executed locally or remotely
* Treat CI/CD as another application in your organization. Not as a glue code.
* On demand environments instead of pre-allocated resources for CI/CD purposes
* Stages/Steps/Tasks of pipelines should be shared between applications or microservices (don't re-invent common tasks like "cloning a project")
</b></details>
<details>
<summary>You are given a pipeline and a pool with 3 workers: virtual machine, baremetal and a container. How will you decide on which one of them to run the pipeline?</summary><br><b>
</b></details>
<details>
<summary>Where do you store CI/CD pipelines? Why?</summary><br><b>
There are multiple approaches as to where to store the CI/CD pipeline definitions:
1. App Repository - store them in the same repository of the application they are building or testing (perhaps the most popular one)
2. Central Repository - store all organization's/project's CI/CD pipelines in one separate repository (perhaps the best approach when multiple teams test the same set of projects and they end up having many pipelines)
3. CI repo for every app repo - you separate CI related code from app code but you don't put everything in one place (perhaps the worst option due to the maintenance)
4. The platform where the CI/CD pipelines are being executed (e.g. Kubernetes Cluster in case of Tekton/OpenShift Pipelines).
</b></details>
<details>
<summary>How do you plan capacity for your CI/CD resources? (e.g. servers, storage, etc.)</summary><br><b>
</b></details>
<details>
<summary>How would you structure/implement CD for an application which depends on several other applications?</summary><br><b>
</b></details>
<details>
<summary>How do you measure your CI/CD quality? Are there any metrics or KPIs you are using for measuring the quality?</summary><br><b>
</b></details>
#### CI/CD - Jenkins
<details>
<summary>What is Jenkins? What have you used it for?</summary><br><b>
Jenkins is an open source automation tool written in Java with plugins built for Continuous Integration purpose. Jenkins is used to build and test your software projects continuously making it easier for developers to integrate changes to the project, and making it easier for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.
Jenkins integrates development life-cycle processes of all kinds, including build, document, test, package, stage, deploy, static analysis and much more.
</b></details>
<details>
<summary>What are the advantages of Jenkins over its competitors? Can you compare it to one of the following systems?
* Travis
* Bamboo
* Teamcity
* CircleCI</summary><br><b>
</b></details>
<details>
<summary>What are the limitations or disadvantages of Jenkins?</summary><br><b>
This might be considered to be an opinionated answer:
* Old fashioned dashboards with not many options to customize it
* Containers readiness (this has improved with Jenkins X)
* By itself, it doesn't have many features. On the other hand, there are many plugins created by the community to expand its abilities
* Managing Jenkins and its pipelines as a code can be one hell of a nightmare
</b></details>
<details>
<summary>Explain the following:
- Job
- Build
- Plugin
- Node or Worker
- Executor</summary><br><b>
- Job is an automation definition = what and where to execute once the user clicks on "build"
- Build is a running instance of a job. You can have one or more builds at any given point of time (unless limited by configuration)
- A plugin is an extension to Jenkins functionality (e.g. an integration with another tool, a new job type, a UI change, ...)
- A worker is the machine/instance on which the build is running. When a build starts, it "acquires" a worker out of a pool to run on it.
- An executor is a property of the worker, defining how many builds can run on that worker in parallel. An executor value of 3 means that 3 builds can run at any point on that worker (not necessarily of the same job; any builds)
</b></details>
<details>
<summary>What plugins have you used in Jenkins?</summary><br><b>
</b></details>
<details>
<summary>Have you used Jenkins for CI or CD processes? Can you describe them?</summary><br><b>
</b></details>
<details>
<summary>What type of jobs are there? Which types have you used?</summary><br><b>
</b></details>
<details>
<summary>How did you report build results to users? What ways are there to report the results?</summary><br><b>
You can report via:
* Emails
* Messaging apps
* Dashboards
Each has its own disadvantages and advantages. Emails, for example, if sent too often, can eventually be disregarded or ignored.
</b></details>
<details>
<summary>You need to run unit tests every time a change submitted to a given project. Describe in details how your pipeline would look like and what will be executed in each stage</summary><br><b>
The pipelines will have multiple stages:
* Clone the project
* Install test dependencies (for example, if I need tox package to run the tests, I will install it in this stage)
* Run unit tests
* (Optional) report results (For example an email to the users)
* Archive the relevant logs/files
</b></details>
<details>
<summary>How to secure Jenkins?</summary><br><b>
[Jenkins documentation](https://www.jenkins.io/doc/book/security/securing-jenkins/) provides some basic intro for securing your Jenkins server.
</b></details>
<details>
<summary>Describe how you add new nodes (agents) to Jenkins</summary><br><b>
You can describe the UI way to add new nodes, but it's better to explain how to do it in a way that scales, like a script, or using a dynamic source for nodes like one of the existing clouds.
</b></details>
<details>
<summary>How to acquire multiple nodes for one specific build?</summary><br><b>
</b></details>
<details>
<summary>Whenever a build fails, you would like to notify the team owning the job regarding the failure and provide failure reason. How would you do that?</summary><br><b>
</b></details>
<details>
<summary>There are four teams in your organization. How to prioritize the builds of each team? So the jobs of team x will always run before team y for example</summary><br><b>
</b></details>
<details>
<summary>If you are managing a dozen of jobs, you can probably use the Jenkins UI. But how do you manage the creation and deletion of hundreds of jobs every week/month?</summary><br><b>
</b></details>
<details>
<summary>What are some of Jenkins limitations?</summary><br><b>
* Testing cross-dependencies (changes from multiple projects together)
* Starting builds from any stage (although Cloudbees implemented something called checkpoints)
</b></details>
<details>
<summary>What is the difference between a scripted pipeline and a declarative pipeline? Which type are you using?</summary><br><b>
</b></details>
<details>
<summary>How would you implement an option of starting a build from a certain stage and not from the beginning?</summary><br><b>
</b></details>
<details>
<summary>Do you have experience with developing a Jenkins plugin? Can you describe this experience?</summary><br><b>
</b></details>
<details>
<summary>Have you written Jenkins scripts? If yes, what for and how they work?</summary><br><b>
</b></details>
#### CI/CD - GitHub Actions
<details>
<summary>What is a Workflow in GitHub Actions?</summary><br><b>
A YAML file that defines the automation actions and instructions to execute upon a specific event.<br>
The file is placed in the repository itself.
A Workflow can be anything - running tests, compiling code, building packages, ...
</b></details>
<details>
<summary>What is a Runner in GitHub Actions?</summary><br><b>
A workflow has to be executed somewhere. The environment where the workflow is executed is called Runner.<br>
A Runner can be an on-premises host or a GitHub-hosted one.
</b></details>
<details>
<summary>What is a Job in GitHub Actions?</summary><br><b>
A job is a series of steps which are executed on the same runner/environment.<br>
A workflow must include at least one job.
</b></details>
<details>
<summary>What is an Action in GitHub Actions?</summary><br><b>
An action is the smallest unit in a workflow. It includes the commands to execute as part of the job.
</b></details>
<details>
<summary>In GitHub Actions workflow, what the 'on' attribute/directive is used for?</summary><br><b>
Specify upon which events the workflow will be triggered.<br>
For example, you might configure the workflow to trigger every time a change is pushed to the repository.
</b></details>
<details>
<summary>True or False? In GitHub Actions, jobs are executed in parallel by default</summary><br><b>
True
</b></details>
<details>
<summary>How to create dependencies between jobs so one job runs after another?</summary><br><b>
Using the "needs" attribute/directive.
```
jobs:
job1:
job2:
needs: job1
```
In the above example, job1 must complete successfully before job2 runs
</b></details>
<details>
<summary>How to add a Workflow to a repository?</summary><br><b>
CLI:
1. Create the directory `.github/workflows` in the repository
2. Add a YAML file
UI:
1. In the repository page, click on "Actions"
2. Choose workflow and click on "Set up this workflow"
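As a sketch, the CLI route could look like this (the workflow name, file name and trigger are arbitrary examples):

```shell
# Create the workflows directory and add a minimal workflow file
mkdir -p .github/workflows
cat > .github/workflows/ci.yaml <<'EOF'
name: CI
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: echo "running tests"
EOF
```

Once pushed, GitHub picks up any YAML file under `.github/workflows` automatically.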
</b></details>
#### Zuul
<details>
<summary>In Zuul, What are the <code>check</code> pipelines?</summary><br><b>
`check` pipelines are triggered when a patch is uploaded to a code review system (e.g. Gerrit).<br>
</b></details>
<details>
<summary>In Zuul, What are the <code>gate</code> pipelines?</summary><br><b>
`gate` pipelines are triggered when a code reviewer approves the change in a code review system (e.g. Gerrit)
</b></details>
<details>
<summary>True or False? <code>gate</code> pipelines run after the <code>check</code> pipelines</summary><br><b>
True. `check` pipelines run when the change is uploaded, while `gate` pipelines run when the change is approved by a reviewer
</b></details>


@@ -0,0 +1,26 @@
## CI for Open Source Project
1. Choose an open source project from Github and fork it
2. Create a CI pipeline/workflow for the project you forked
3. The CI pipeline/workflow will include anything that is relevant to the project you forked. For example:
* If it's a Python project, you will run PEP8
* If the project has unit tests directory, you will run these unit tests as part of the CI
4. In a separate file, describe what is running as part of the CI and why you chose to include it. You can also describe any thoughts, dilemmas, challenge you had
### Bonus
Containerize the app of the project you forked using any container engine you would like (e.g. Docker, Podman).<br>
Once you successfully ran the application in a container, submit the Dockerfile to the original project (but be prepared that the maintainer might not need/want that).
### Suggestions for Projects
The following is a list of projects without CI (at least at the moment):
Note: I wrote a script to find these (except the first project on the list, of course) based on some parameters in case you wonder why these projects specifically are listed.
* [This one](https://github.com/bregman-arie/devops-exercises) - We don't have CI! help! :)
* [image retrieval platform](https://github.com/skx6/image_retrieval_platform)
* [FollowSpot](https://github.com/jenbrissman/FollowSpot)
* [Pyrin](https://github.com/mononobi/pyrin)
* [food-detection-yolov5](https://github.com/lannguyen0910/food-detection-yolov5)
* [Lifely](https://github.com/sagnik1511/Lifely)


@@ -0,0 +1,5 @@
## Deploy to Kubernetes
* Write a pipeline that will deploy a "hello world" web app to Kubernetes
* The CI/CD system (where the pipeline resides) and the Kubernetes cluster should be on separate systems
* The web app should be accessible remotely and only with HTTPS


@@ -0,0 +1,14 @@
### Jenkins - Remove Builds
#### Objective
Learn how to write a Jenkins script that interacts with builds by removing builds older than X days.
#### Instructions
1. Pick up (or create) a job which has builds older than X days
2. Write a script to remove only the builds that are older than X days
#### Hints
X can be anything. For example, remove builds that are older than 3 days. Just make sure that you don't simply remove all the builds (since that's different from the objective).


@@ -0,0 +1,10 @@
### Jenkins - Remove Jobs
#### Objective
Learn how to write a Jenkins script to remove Jenkins jobs
#### Instructions
1. Create three jobs called: test-job, test2-job and prod-job
2. Write a script to remove all the jobs that include the string "test"


@@ -0,0 +1,45 @@
pipeline {
    agent any

    stages {
        stage('Checkout Source') {
            steps {
                git url: 'https://github.com/<GITHUB_USERNAME>/<YOUR_WEB_APP_REPO>.git',
                    // credentialsId: 'creds_github',
                    branch: 'master'
            }
        }
        stage('Build image') {
            steps {
                script {
                    myapp = docker.build("<YOUR_DOCKER_USERNAME>/helloworld:${env.BUILD_ID}")
                }
            }
        }
        stage('Push image') {
            steps {
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
                        myapp.push("latest")
                        myapp.push("${env.BUILD_ID}")
                    }
                }
            }
        }
        stage('Deploy App') {
            steps {
                script {
                    sh 'ansible-playbook deploy.yml'
                }
            }
        }
    }
}


@@ -0,0 +1,15 @@
## Deploy to Kubernetes
Note: this exercise can be solved in various ways. The solution described here is just one possible way.
1. Install Jenkins on one system (follow up the standard Jenkins installation procedure)
2. Deploy Kubernetes on a remote host (minikube can be an easy way to achieve it)
3. Create a simple web app or [page](html)
4. Create Kubernetes [resources](helloworld.yml) - Deployment, Service and Ingress (for HTTPS access)
5. Create an [Ansible inventory](inventory) and insert the address of the Kubernetes cluster
6. Write an [Ansible playbook](deploy.yml) to deploy the Kubernetes resources and also generate a self-signed certificate
7. Create a [pipeline](Jenkinsfile)
8. Run the pipeline :)
9. Try to access the web app remotely


@@ -0,0 +1,42 @@
- name: Apply Kubernetes YAMLs
  hosts: kubernetes
  tasks:
    - name: Ensure SSL related directories exist
      file:
        path: "{{ item }}"
        state: directory
      loop:
        - "/etc/ssl/crt"
        - "/etc/ssl/csr"
        - "/etc/ssl/private"

    - name: Generate an OpenSSL private key
      openssl_privatekey:
        path: /etc/ssl/private/privkey.pem

    - name: Generate an OpenSSL certificate signing request
      openssl_csr:
        path: /etc/ssl/csr/hello-world.app.csr
        privatekey_path: /etc/ssl/private/privkey.pem
        common_name: hello-world.app

    - name: Generate a self-signed OpenSSL certificate
      openssl_certificate:
        path: /etc/ssl/crt/hello-world.app.crt
        privatekey_path: /etc/ssl/private/privkey.pem
        csr_path: /etc/ssl/csr/hello-world.app.csr
        provider: selfsigned

    - name: Create k8s secret
      command: "kubectl create secret tls tls-secret --cert=/etc/ssl/crt/hello-world.app.crt --key=/etc/ssl/private/privkey.pem"
      register: result
      failed_when:
        - result.rc == 2

    - name: Deploy web app
      k8s:
        state: present
        definition: "{{ lookup('file', './helloworld.yml') }}"
        kubeconfig: '/home/abregman/.kube/config'
        namespace: 'default'
        wait: true


@@ -0,0 +1,65 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-blue-whale
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-app
      version: blue
  template:
    metadata:
      name: hello-blue-whale-pod
      labels:
        app: hello-world-app
        version: blue
    spec:
      containers:
      - name: hello-whale-container
        image: abregman2/helloworld:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world-app
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: hello-world-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-issuer
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - hello-world.app
    secretName: shhh
  rules:
  - host: hello-world.app
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80


@@ -0,0 +1,427 @@
/*! normalize.css v3.0.2 | MIT License | git.io/normalize */
/**
* 1. Set default font family to sans-serif.
* 2. Prevent iOS text size adjust after orientation change, without disabling
* user zoom.
*/
html {
font-family: sans-serif; /* 1 */
-ms-text-size-adjust: 100%; /* 2 */
-webkit-text-size-adjust: 100%; /* 2 */
}
/**
* Remove default margin.
*/
body {
margin: 0;
}
/* HTML5 display definitions
========================================================================== */
/**
* Correct `block` display not defined for any HTML5 element in IE 8/9.
* Correct `block` display not defined for `details` or `summary` in IE 10/11
* and Firefox.
* Correct `block` display not defined for `main` in IE 11.
*/
article,
aside,
details,
figcaption,
figure,
footer,
header,
hgroup,
main,
menu,
nav,
section,
summary {
display: block;
}
/**
* 1. Correct `inline-block` display not defined in IE 8/9.
* 2. Normalize vertical alignment of `progress` in Chrome, Firefox, and Opera.
*/
audio,
canvas,
progress,
video {
display: inline-block; /* 1 */
vertical-align: baseline; /* 2 */
}
/**
* Prevent modern browsers from displaying `audio` without controls.
* Remove excess height in iOS 5 devices.
*/
audio:not([controls]) {
display: none;
height: 0;
}
/**
* Address `[hidden]` styling not present in IE 8/9/10.
* Hide the `template` element in IE 8/9/11, Safari, and Firefox < 22.
*/
[hidden],
template {
display: none;
}
/* Links
========================================================================== */
/**
* Remove the gray background color from active links in IE 10.
*/
a {
background-color: transparent;
}
/**
* Improve readability when focused and also mouse hovered in all browsers.
*/
a:active,
a:hover {
outline: 0;
}
/* Text-level semantics
========================================================================== */
/**
* Address styling not present in IE 8/9/10/11, Safari, and Chrome.
*/
abbr[title] {
border-bottom: 1px dotted;
}
/**
* Address style set to `bolder` in Firefox 4+, Safari, and Chrome.
*/
b,
strong {
font-weight: bold;
}
/**
* Address styling not present in Safari and Chrome.
*/
dfn {
font-style: italic;
}
/**
* Address variable `h1` font-size and margin within `section` and `article`
* contexts in Firefox 4+, Safari, and Chrome.
*/
h1 {
font-size: 2em;
margin: 0.67em 0;
}
/**
* Address styling not present in IE 8/9.
*/
mark {
background: #ff0;
color: #000;
}
/**
* Address inconsistent and variable font size in all browsers.
*/
small {
font-size: 80%;
}
/**
* Prevent `sub` and `sup` affecting `line-height` in all browsers.
*/
sub,
sup {
font-size: 75%;
line-height: 0;
position: relative;
vertical-align: baseline;
}
sup {
top: -0.5em;
}
sub {
bottom: -0.25em;
}
/* Embedded content
========================================================================== */
/**
* Remove border when inside `a` element in IE 8/9/10.
*/
img {
border: 0;
}
/**
* Correct overflow not hidden in IE 9/10/11.
*/
svg:not(:root) {
overflow: hidden;
}
/* Grouping content
========================================================================== */
/**
* Address margin not present in IE 8/9 and Safari.
*/
figure {
margin: 1em 40px;
}
/**
* Address differences between Firefox and other browsers.
*/
hr {
-moz-box-sizing: content-box;
box-sizing: content-box;
height: 0;
}
/**
* Contain overflow in all browsers.
*/
pre {
overflow: auto;
}
/**
* Address odd `em`-unit font size rendering in all browsers.
*/
code,
kbd,
pre,
samp {
font-family: monospace, monospace;
font-size: 1em;
}
/* Forms
========================================================================== */
/**
* Known limitation: by default, Chrome and Safari on OS X allow very limited
* styling of `select`, unless a `border` property is set.
*/
/**
* 1. Correct color not being inherited.
* Known issue: affects color of disabled elements.
* 2. Correct font properties not being inherited.
* 3. Address margins set differently in Firefox 4+, Safari, and Chrome.
*/
button,
input,
optgroup,
select,
textarea {
color: inherit; /* 1 */
font: inherit; /* 2 */
margin: 0; /* 3 */
}
/**
* Address `overflow` set to `hidden` in IE 8/9/10/11.
*/
button {
overflow: visible;
}
/**
* Address inconsistent `text-transform` inheritance for `button` and `select`.
* All other form control elements do not inherit `text-transform` values.
* Correct `button` style inheritance in Firefox, IE 8/9/10/11, and Opera.
* Correct `select` style inheritance in Firefox.
*/
button,
select {
text-transform: none;
}
/**
* 1. Avoid the WebKit bug in Android 4.0.* where (2) destroys native `audio`
* and `video` controls.
* 2. Correct inability to style clickable `input` types in iOS.
* 3. Improve usability and consistency of cursor style between image-type
* `input` and others.
*/
button,
html input[type="button"], /* 1 */
input[type="reset"],
input[type="submit"] {
-webkit-appearance: button; /* 2 */
cursor: pointer; /* 3 */
}
/**
* Re-set default cursor for disabled elements.
*/
button[disabled],
html input[disabled] {
cursor: default;
}
/**
* Remove inner padding and border in Firefox 4+.
*/
button::-moz-focus-inner,
input::-moz-focus-inner {
border: 0;
padding: 0;
}
/**
* Address Firefox 4+ setting `line-height` on `input` using `!important` in
* the UA stylesheet.
*/
input {
line-height: normal;
}
/**
* It's recommended that you don't attempt to style these elements.
* Firefox's implementation doesn't respect box-sizing, padding, or width.
*
* 1. Address box sizing set to `content-box` in IE 8/9/10.
* 2. Remove excess padding in IE 8/9/10.
*/
input[type="checkbox"],
input[type="radio"] {
box-sizing: border-box; /* 1 */
padding: 0; /* 2 */
}
/**
* Fix the cursor style for Chrome's increment/decrement buttons. For certain
* `font-size` values of the `input`, it causes the cursor style of the
* decrement button to change from `default` to `text`.
*/
input[type="number"]::-webkit-inner-spin-button,
input[type="number"]::-webkit-outer-spin-button {
height: auto;
}
/**
* 1. Address `appearance` set to `searchfield` in Safari and Chrome.
* 2. Address `box-sizing` set to `border-box` in Safari and Chrome
* (include `-moz` to future-proof).
*/
input[type="search"] {
-webkit-appearance: textfield; /* 1 */
-moz-box-sizing: content-box;
-webkit-box-sizing: content-box; /* 2 */
box-sizing: content-box;
}
/**
* Remove inner padding and search cancel button in Safari and Chrome on OS X.
* Safari (but not Chrome) clips the cancel button when the search input has
* padding (and `textfield` appearance).
*/
input[type="search"]::-webkit-search-cancel-button,
input[type="search"]::-webkit-search-decoration {
-webkit-appearance: none;
}
/**
* Define consistent border, margin, and padding.
*/
fieldset {
border: 1px solid #c0c0c0;
margin: 0 2px;
padding: 0.35em 0.625em 0.75em;
}
/**
* 1. Correct `color` not being inherited in IE 8/9/10/11.
* 2. Remove padding so people aren't caught out if they zero out fieldsets.
*/
legend {
border: 0; /* 1 */
padding: 0; /* 2 */
}
/**
* Remove default vertical scrollbar in IE 8/9/10/11.
*/
textarea {
overflow: auto;
}
/**
* Don't inherit the `font-weight` (applied by a rule above).
* NOTE: the default cannot safely be changed in Chrome and Safari on OS X.
*/
optgroup {
font-weight: bold;
}
/* Tables
========================================================================== */
/**
* Remove most spacing between table cells.
*/
table {
border-collapse: collapse;
border-spacing: 0;
}
td,
th {
padding: 0;
}


@@ -0,0 +1,418 @@
/*
* Skeleton V2.0.4
* Copyright 2014, Dave Gamache
* www.getskeleton.com
* Free to use under the MIT license.
* http://www.opensource.org/licenses/mit-license.php
* 12/29/2014
*/
/* Table of contents
- Grid
- Base Styles
- Typography
- Links
- Buttons
- Forms
- Lists
- Code
- Tables
- Spacing
- Utilities
- Clearing
- Media Queries
*/
/* Grid
*/
.container {
position: relative;
width: 100%;
max-width: 960px;
margin: 0 auto;
padding: 0 20px;
box-sizing: border-box; }
.column,
.columns {
width: 100%;
float: left;
box-sizing: border-box; }
/* For devices larger than 400px */
@media (min-width: 400px) {
.container {
width: 85%;
padding: 0; }
}
/* For devices larger than 550px */
@media (min-width: 550px) {
.container {
width: 80%; }
.column,
.columns {
margin-left: 4%; }
.column:first-child,
.columns:first-child {
margin-left: 0; }
.one.column,
.one.columns { width: 4.66666666667%; }
.two.columns { width: 13.3333333333%; }
.three.columns { width: 22%; }
.four.columns { width: 30.6666666667%; }
.five.columns { width: 39.3333333333%; }
.six.columns { width: 48%; }
.seven.columns { width: 56.6666666667%; }
.eight.columns { width: 65.3333333333%; }
.nine.columns { width: 74.0%; }
.ten.columns { width: 82.6666666667%; }
.eleven.columns { width: 91.3333333333%; }
.twelve.columns { width: 100%; margin-left: 0; }
.one-third.column { width: 30.6666666667%; }
.two-thirds.column { width: 65.3333333333%; }
.one-half.column { width: 48%; }
/* Offsets */
.offset-by-one.column,
.offset-by-one.columns { margin-left: 8.66666666667%; }
.offset-by-two.column,
.offset-by-two.columns { margin-left: 17.3333333333%; }
.offset-by-three.column,
.offset-by-three.columns { margin-left: 26%; }
.offset-by-four.column,
.offset-by-four.columns { margin-left: 34.6666666667%; }
.offset-by-five.column,
.offset-by-five.columns { margin-left: 43.3333333333%; }
.offset-by-six.column,
.offset-by-six.columns { margin-left: 52%; }
.offset-by-seven.column,
.offset-by-seven.columns { margin-left: 60.6666666667%; }
.offset-by-eight.column,
.offset-by-eight.columns { margin-left: 69.3333333333%; }
.offset-by-nine.column,
.offset-by-nine.columns { margin-left: 78.0%; }
.offset-by-ten.column,
.offset-by-ten.columns { margin-left: 86.6666666667%; }
.offset-by-eleven.column,
.offset-by-eleven.columns { margin-left: 95.3333333333%; }
.offset-by-one-third.column,
.offset-by-one-third.columns { margin-left: 34.6666666667%; }
.offset-by-two-thirds.column,
.offset-by-two-thirds.columns { margin-left: 69.3333333333%; }
.offset-by-one-half.column,
.offset-by-one-half.columns { margin-left: 52%; }
}
/* Base Styles
*/
/* NOTE
html is set to 62.5% so that all the REM measurements throughout Skeleton
are based on 10px sizing. So basically 1.5rem = 15px :) */
html {
font-size: 62.5%; }
body {
font-size: 1.5em; /* currently ems cause chrome bug misinterpreting rems on body element */
line-height: 1.6;
font-weight: 400;
font-family: "Raleway", "HelveticaNeue", "Helvetica Neue", Helvetica, Arial, sans-serif;
color: #222; }
/* Typography
*/
h1, h2, h3, h4, h5, h6 {
margin-top: 0;
margin-bottom: 2rem;
font-weight: 300; }
h1 { font-size: 4.0rem; line-height: 1.2; letter-spacing: -.1rem;}
h2 { font-size: 3.6rem; line-height: 1.25; letter-spacing: -.1rem; }
h3 { font-size: 3.0rem; line-height: 1.3; letter-spacing: -.1rem; }
h4 { font-size: 2.4rem; line-height: 1.35; letter-spacing: -.08rem; }
h5 { font-size: 1.8rem; line-height: 1.5; letter-spacing: -.05rem; }
h6 { font-size: 1.5rem; line-height: 1.6; letter-spacing: 0; }
/* Larger than phablet */
@media (min-width: 550px) {
h1 { font-size: 5.0rem; }
h2 { font-size: 4.2rem; }
h3 { font-size: 3.6rem; }
h4 { font-size: 3.0rem; }
h5 { font-size: 2.4rem; }
h6 { font-size: 1.5rem; }
}
p {
margin-top: 0; }
/* Links
*/
a {
color: #1EAEDB; }
a:hover {
color: #0FA0CE; }
/* Buttons
*/
.button,
button,
input[type="submit"],
input[type="reset"],
input[type="button"] {
display: inline-block;
height: 38px;
padding: 0 30px;
color: #555;
text-align: center;
font-size: 11px;
font-weight: 600;
line-height: 38px;
letter-spacing: .1rem;
text-transform: uppercase;
text-decoration: none;
white-space: nowrap;
background-color: transparent;
border-radius: 4px;
border: 1px solid #bbb;
cursor: pointer;
box-sizing: border-box; }
.button:hover,
button:hover,
input[type="submit"]:hover,
input[type="reset"]:hover,
input[type="button"]:hover,
.button:focus,
button:focus,
input[type="submit"]:focus,
input[type="reset"]:focus,
input[type="button"]:focus {
color: #333;
border-color: #888;
outline: 0; }
.button.button-primary,
button.button-primary,
input[type="submit"].button-primary,
input[type="reset"].button-primary,
input[type="button"].button-primary {
color: #FFF;
background-color: #33C3F0;
border-color: #33C3F0; }
.button.button-primary:hover,
button.button-primary:hover,
input[type="submit"].button-primary:hover,
input[type="reset"].button-primary:hover,
input[type="button"].button-primary:hover,
.button.button-primary:focus,
button.button-primary:focus,
input[type="submit"].button-primary:focus,
input[type="reset"].button-primary:focus,
input[type="button"].button-primary:focus {
color: #FFF;
background-color: #1EAEDB;
border-color: #1EAEDB; }
/* Forms
*/
input[type="email"],
input[type="number"],
input[type="search"],
input[type="text"],
input[type="tel"],
input[type="url"],
input[type="password"],
textarea,
select {
height: 38px;
padding: 6px 10px; /* The 6px vertically centers text on FF, ignored by Webkit */
background-color: #fff;
border: 1px solid #D1D1D1;
border-radius: 4px;
box-shadow: none;
box-sizing: border-box; }
/* Removes awkward default styles on some inputs for iOS */
input[type="email"],
input[type="number"],
input[type="search"],
input[type="text"],
input[type="tel"],
input[type="url"],
input[type="password"],
textarea {
-webkit-appearance: none;
-moz-appearance: none;
appearance: none; }
textarea {
min-height: 65px;
padding-top: 6px;
padding-bottom: 6px; }
input[type="email"]:focus,
input[type="number"]:focus,
input[type="search"]:focus,
input[type="text"]:focus,
input[type="tel"]:focus,
input[type="url"]:focus,
input[type="password"]:focus,
textarea:focus,
select:focus {
border: 1px solid #33C3F0;
outline: 0; }
label,
legend {
display: block;
margin-bottom: .5rem;
font-weight: 600; }
fieldset {
padding: 0;
border-width: 0; }
input[type="checkbox"],
input[type="radio"] {
display: inline; }
label > .label-body {
display: inline-block;
margin-left: .5rem;
font-weight: normal; }
/* Lists
*/
ul {
list-style: circle inside; }
ol {
list-style: decimal inside; }
ol, ul {
padding-left: 0;
margin-top: 0; }
ul ul,
ul ol,
ol ol,
ol ul {
margin: 1.5rem 0 1.5rem 3rem;
font-size: 90%; }
li {
margin-bottom: 1rem; }
/* Code
*/
code {
padding: .2rem .5rem;
margin: 0 .2rem;
font-size: 90%;
white-space: nowrap;
background: #F1F1F1;
border: 1px solid #E1E1E1;
border-radius: 4px; }
pre > code {
display: block;
padding: 1rem 1.5rem;
white-space: pre; }
/* Tables
*/
th,
td {
padding: 12px 15px;
text-align: left;
border-bottom: 1px solid #E1E1E1; }
th:first-child,
td:first-child {
padding-left: 0; }
th:last-child,
td:last-child {
padding-right: 0; }
/* Spacing
*/
button,
.button {
margin-bottom: 1rem; }
input,
textarea,
select,
fieldset {
margin-bottom: 1.5rem; }
pre,
blockquote,
dl,
figure,
table,
p,
ul,
ol,
form {
margin-bottom: 2.5rem; }
/* Utilities
*/
.u-full-width {
width: 100%;
box-sizing: border-box; }
.u-max-full-width {
max-width: 100%;
box-sizing: border-box; }
.u-pull-right {
float: right; }
.u-pull-left {
float: left; }
/* Misc
*/
hr {
margin-top: 3rem;
margin-bottom: 3.5rem;
border-width: 0;
border-top: 1px solid #E1E1E1; }
/* Clearing
*/
/* Self Clearing Goodness */
.container:after,
.row:after,
.u-cf {
content: "";
display: table;
clear: both; }
/* Media Queries
*/
/*
Note: The best way to structure the use of media queries is to create the queries
near the relevant code. For example, if you wanted to change the styles for buttons
on small devices, paste the mobile query code up in the buttons section and style it
there.
*/
/* Larger than mobile */
@media (min-width: 400px) {}
/* Larger than phablet (also point when grid becomes active) */
@media (min-width: 550px) {}
/* Larger than tablet */
@media (min-width: 750px) {}
/* Larger than desktop */
@media (min-width: 1000px) {}
/* Larger than Desktop HD */
@media (min-width: 1200px) {}


@@ -0,0 +1,45 @@
<!DOCTYPE html>
<html lang="en">
<head>
<!-- Basic Page Needs
-->
<meta charset="utf-8">
<title>Hello World :)</title>
<meta name="description" content="">
<meta name="author" content="">
<!-- Mobile Specific Metas
-->
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- FONT
-->
<link href="//fonts.googleapis.com/css?family=Raleway:400,300,600" rel="stylesheet" type="text/css">
<!-- CSS
-->
<link rel="stylesheet" href="css/normalize.css">
<link rel="stylesheet" href="css/skeleton.css">
<!-- Favicon
-->
<link rel="icon" type="image/png" href="images/favicon.png">
</head>
<body>
<!-- Primary Page Layout
-->
<div class="container">
<div class="row">
<div class="one-half column" style="margin-top: 25%">
<h1 style="color:red"><b>Hello World :)</b></h1>
</div>
</div>
</div>
<!-- End Document
-->
</body>
</html>


@@ -0,0 +1,2 @@
[kubernetes]
x.x.x.x


@@ -0,0 +1,16 @@
def removeOldBuilds(buildDirectory, days = 14) {
    def wp = new File("${buildDirectory}")
    def currentTime = new Date()
    def backTime = currentTime - days

    wp.list().each { fileName ->
        def folder = new File("${buildDirectory}/${fileName}")
        if (folder.isDirectory()) {
            def timeStamp = new Date(folder.lastModified())
            if (timeStamp.before(backTime)) {
                // deleteDir() removes the directory recursively;
                // File.delete() would fail on a non-empty build directory
                folder.deleteDir()
            }
        }
    }
}


@@ -0,0 +1,6 @@
def jobs = Jenkins.instance.items.findAll { job -> job.name =~ /test/ }
jobs.each { job ->
    println job.name
    // Uncomment the next line to actually delete the matched jobs
    //job.delete()
}

108
topics/cloud/README.md Normal file

@@ -0,0 +1,108 @@
## Cloud
<details>
<summary>What is Cloud Computing? What is a Cloud Provider?</summary><br><b>
Cloud computing refers to the delivery of on-demand computing services
over the internet on a pay-as-you-go basis.
In simple words, cloud computing is a service that lets you use computing
resources such as servers, storage, networking, databases, and intelligence,
right through your browser without owning any of the hardware. You can do almost
anything you can think of, as long as it doesn't require you to stay close to your hardware.
Cloud service providers are companies that establish public clouds, manage private clouds, or offer on-demand cloud computing components (also known as cloud computing services) like Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Cloud services can reduce business process costs when compared to on-premise IT.
</b></details>
<details>
<summary>What are the advantages of cloud computing? Mention at least 3 advantages</summary><br><b>
* Pay as you go: you are paying only for what you are using. No upfront payments and payment stops when resources are no longer used.
* Scalable: resources are scaled down or up based on demand
* High availability: resources and applications provide seamless experience, even when some services are down
* Disaster recovery
</b></details>
<details>
<summary>True or False? Cloud computing is a consumption-based model (users only pay for the resources they use)</summary><br><b>
True
</b></details>
<details>
<summary>What types of Cloud Computing services are there?</summary><br><b>
IAAS - Infrastructure as a Service
PAAS - Platform as a Service
SAAS - Software as a Service
</b></details>
<details>
<summary>Explain each of the following and give an example:
* IAAS
* PAAS
* SAAS</summary><br><b>
* IAAS - Users have control over the complete operating system and don't need to worry about the physical resources, which are managed by the cloud service provider.
* PAAS - The cloud service provider takes care of the operating system and middleware; users only need to focus on their data and application.
* SAAS - A cloud-based method of providing software to users. The software logic runs in the cloud, and it can be managed by the cloud service provider or run on-premises.
</b></details>
<details>
<summary>What types of clouds (or cloud deployments) are there?</summary><br><b>
* Public - Cloud services sharing computing resources among multiple customers
* Private - Cloud services with computing resources limited to a specific customer or organization, managed either by a third party or by the organization itself
* Hybrid - Combination of public and private clouds
</b></details>
<details>
<summary>What are the differences between Cloud Providers and On-Premise solution?</summary><br><b>
With cloud providers, someone else owns and manages the hardware, hires the relevant infrastructure teams, and pays for the real estate (for both hardware and people). You can focus on your business.
With an on-premise solution, it's quite the opposite: you need to take care of the hardware and infrastructure teams and pay for everything yourself, which can be quite expensive. On the other hand, it's tailored to your needs.
</b></details>
<details>
<summary>What is Serverless Computing?</summary><br><b>
The main idea behind serverless computing is that you don't need to manage the creation and configuration of servers. All you need to focus on is splitting your app into multiple functions, which will be triggered by some actions.
It's important to note that:
* Serverless Computing is still using servers. So saying there are no servers in serverless computing is completely wrong
* Serverless Computing allows you to have a different payment model. You basically pay only while your functions are running, not for the whole time a VM or container is up, as in other payment models
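
As an illustration of the model, here is a minimal sketch of a Lambda-style function handler. The event shape and the `name` field are hypothetical, not tied to any specific provider's API:

```python
import json

def handler(event, context):
    # Runs only when triggered (e.g. by an HTTP request or a queue message);
    # you pay only for the time this function actually executes.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

The provider takes care of provisioning and scaling the servers that actually run the handler.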
</b></details>
<details>
<summary>Can we replace any type of computing on servers with serverless?</summary><br><b>
</b></details>
<details>
<summary>Is there a difference between managed service to SaaS or is it the same thing?</summary><br><b>
</b></details>
<details>
<summary>What is auto scaling?</summary><br><b>
AWS definition: "AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost"
Read more about auto scaling [here](https://aws.amazon.com/autoscaling)
</b></details>
<details>
<summary>True or False? Auto Scaling is about adding resources (such as instances) and not about removing resources</summary><br><b>
False. Auto scaling adjusts capacity, which can also mean removing some resources, based on usage and performance.
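
The two-way nature of auto scaling can be sketched with a toy target-tracking rule. This is illustrative only; the formula and defaults are made up and are not AWS's actual algorithm:

```python
import math

def desired_capacity(current, cpu_util, target=50.0, min_cap=1, max_cap=10):
    # Scale the instance count proportionally to observed load vs. target,
    # then clamp to the allowed range. High utilization scales out (adds
    # instances); low utilization scales in (removes instances).
    desired = math.ceil(current * cpu_util / target)
    return max(min_cap, min(max_cap, desired))
```

For example, 2 instances at 100% CPU scale out to 4, while 2 instances at 25% CPU scale in to 1.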
</b></details>
#### Cloud - Security
<details>
<summary>How to secure instances in the cloud?</summary><br><b>
* Instance should have minimal permissions needed. You don't want an instance-level incident to become an account-level incident
* Instances should be accessed through load balancers or bastion hosts. In other words, they should be off the internet (in a private subnet behind a NAT).
* Use the latest OS images for your instances (or at least apply the latest patches)
</b></details>

Some files were not shown because too many files have changed in this diff.