Compare commits
No commits in common. "895ac0f15530d3047f95d6dfac83b04cdbf0d1c4" and "672a1bc9e77c9412ec66465999f938b26e6c7716" have entirely different histories.
2 .github/workflows/ci_workflow.yml (vendored)
@ -14,5 +14,5 @@ jobs:
|
||||
- name: Give executable permissions to run_ci.sh inside the scripts directory
|
||||
run: chmod a+x scripts/run_ci.sh
|
||||
- name: Run the ci script inside the scripts folder
|
||||
run: bash scripts/run_ci.sh
|
||||
run: sh scripts/run_ci.sh
|
||||
shell: bash
|
4 LICENSE
@ -3,7 +3,7 @@ THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE COMMONS
|
||||
BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY BE CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS.
|
||||
|
||||
1. Definitions
|
||||
"Adaptation" means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("syncing") will be considered an Adaptation for the purpose of this License.
|
||||
"Adaptation" means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered an Adaptation for the purpose of this License.
|
||||
"Collection" means a collection of literary or artistic works, such as encyclopedias and anthologies, or performances, phonograms or broadcasts, or other works or subject matter other than works listed in Section 1(f) below, which, by reason of the selection and arrangement of their contents, constitute intellectual creations, in which the Work is included in its entirety in unmodified form along with one or more other contributions, each constituting separate and independent works in themselves, which together are assembled into a collective whole. A work that constitutes a Collection will not be considered an Adaptation (as defined above) for the purposes of this License.
|
||||
"Distribute" means to make available to the public the original and copies of the Work through sale or other transfer of ownership.
|
||||
"Licensor" means the individual, individuals, entity or entities that offer(s) the Work under the terms of this License.
|
||||
@ -37,7 +37,7 @@ Voluntary License Schemes. The Licensor reserves the right to collect royalties,
|
||||
Except as otherwise agreed in writing by the Licensor or as may be otherwise permitted by applicable law, if You Reproduce, Distribute or Publicly Perform the Work either by itself or as part of any Collections, You must not distort, mutilate, modify or take other derogatory action in relation to the Work which would be prejudicial to the Original Author's honor or reputation.
|
||||
|
||||
5. Representations, Warranties and Disclaimer
|
||||
UNLESS OTHERWISE MUTUALLY AGREED BY THE PARTIES IN WRITING, LICENSOR OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
|
||||
UNLESS OTHERWISE MUTUALLY AGREED BY THE PARTIES IN WRITING, LICENSOR OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
|
||||
|
||||
6. Limitation on Liability.
|
||||
EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
|
||||
|
@ -289,7 +289,7 @@
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>How would you implement the option of building from a certain stage rather than from the very beginning?</summary><br><b>
<summary>How would you implement the option of building from a certain stage rather than from the very beginning?<summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -924,7 +924,7 @@ Zombie (defunct state)
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Can you explain how a network process/connection is established and how it is terminated?</summary><br></b>
<summary>Can you explain how a network process/connection is established and how it is terminated?><br></b>
|
||||
</b></details>
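To make the lifecycle concrete, a small Python sketch (the host example.com and port 80 are arbitrary choices): opening the socket triggers the TCP three-way handshake, and closing it triggers the FIN/ACK teardown.

```python
import socket

# connect() makes the kernel perform the three-way handshake: SYN -> SYN/ACK -> ACK
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    print(conn.recv(200).decode(errors="replace"))
# leaving the with-block calls close(), which starts the FIN/ACK teardown
```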
|
||||
|
||||
<details>
|
||||
@ -1385,7 +1385,6 @@ Advantages of Terraform over other tools:
|
||||
* Provider
|
||||
* Resource
|
||||
* Provisioner
|
||||
</summary>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -1657,7 +1656,7 @@ Docker Cloud is built on top of Docker Hub, so Docker Cloud provides
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain recursion</summary><br><b>
<summary>Explain recursion</summary<br><b>
|
||||
</b></details>
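A minimal Python sketch of recursion (factorial is just a convenient example):

```python
def factorial(n: int) -> int:
    # Base case: stops the recursion
    if n <= 1:
        return 1
    # Recursive case: the function calls itself on a smaller input
    return n * factorial(n - 1)

print(factorial(5))  # 120
```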
|
||||
|
||||
<details>
|
||||
@ -1952,11 +1951,11 @@ with open('file.txt', 'w') as file:
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>How do you replace the string "green" with "blue"?</summary><br><b>
<summay>How do you replace the string "green" with "blue"?</summary><br><b>
|
||||
</b></details>
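One possible answer as a short Python sketch (the sample sentence is made up):

```python
text = "the wall is green"

# str.replace returns a new string; the original is left unchanged
print(text.replace("green", "blue"))  # the wall is blue
```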
|
||||
|
||||
<details>
|
||||
<summary>How do you find all the IP addresses in a variable? How do you find them in a file?</summary><br><b>
<summay>How do you find all the IP addresses in a variable? How do you find them in a file?</summary><br><b>
|
||||
</b></details>
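A hedged Python sketch of one approach; the regex is a simple (not strictly validating) IPv4 pattern and `servers.txt` is an assumed file name:

```python
import re

# Simple IPv4 pattern: four dot-separated groups of 1-3 digits
ip_pattern = r"\b(?:\d{1,3}\.){3}\d{1,3}\b"

text = "host1 10.0.0.1 and host2 192.168.1.12"
print(re.findall(ip_pattern, text))  # ['10.0.0.1', '192.168.1.12']

# Same idea for a file: read it and apply the same pattern
with open("servers.txt") as f:
    print(re.findall(ip_pattern, f.read()))
```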
|
||||
|
||||
<details>
|
||||
@ -2073,7 +2072,6 @@ def reverse_string(string):
|
||||
* Mergesort
|
||||
* Bucket Sort
|
||||
* Radix Sort
|
||||
</summary>
|
||||
</b></details>
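For one of the algorithms listed above, a short merge sort sketch in Python (the input list is made up):

```python
def merge_sort(items):
    # Lists of length 0 or 1 are already sorted
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged = []
    # Merge the two sorted halves
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```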
|
||||
|
||||
<a name="python-advanced"></a>
|
||||
@ -2112,7 +2110,7 @@ def reverse_string(string):
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Can you implement a linked list in Python?</summary><br><b>
<summary>Can you implement a linked list in Python?<br><b>
|
||||
</b></details>
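A minimal singly linked list sketch in Python (class and method names are illustrative):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None


class LinkedList:
    def __init__(self):
        self.head = None

    def append(self, value):
        # Walk to the tail and attach the new node
        node = Node(value)
        if self.head is None:
            self.head = node
            return
        current = self.head
        while current.next:
            current = current.next
        current.next = node

    def to_list(self):
        values, current = [], self.head
        while current:
            values.append(current.value)
            current = current.next
        return values


ll = LinkedList()
for v in (1, 2, 3):
    ll.append(v)
print(ll.to_list())  # [1, 2, 3]
```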
|
||||
|
||||
<details>
|
||||
@ -2834,7 +2832,7 @@ where c.Customer_ID in (Select Customer_ID from cat_food);
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Describe in detail how to launch an instance using an IP that is accessible from outside the cloud</summary><br><b>
<summmary>Describe in detail how to launch an instance using an IP that is accessible from outside the cloud</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
|
@ -1,3 +0,0 @@
|
||||
## AWS Certification Paths
|
||||
|
||||
[AWS Certification Paths based on Cloud Roles and Responsibilities](https://d1.awsstatic.com/training-and-certification/docs/AWS_certification_paths.pdf)
|
@ -24,7 +24,7 @@ SAAS
|
||||
* IAAS
|
||||
* PAAS
|
||||
* SAAS</summary><br><b>
|
||||
- IAAS - Infrastructure As A Service is a cloud computing service where a cloud provider rents out IT infrastructure such as compute, networking resources and storage over the internet.<br>
|
||||
- IAAS - Infrastructure As A Service is a cloud computing service where a cloud provider rents out IT infrastructure such as compute, networking resources and strorage over the internet.<br>
|
||||
|
||||
- PAAS - Platform As A Service is a cloud hosting platform with an on-demand access to ready-to-use set of deployment, application management and DevOps tools.<br>
|
||||
|
||||
@ -400,8 +400,8 @@ Learn more [here](https://aws.amazon.com/snowmobile)
|
||||
<details>
|
||||
<summary>What is IAM? What are some of its features?</summary><br><b>
|
||||
|
||||
IAM stands for Identity and Access Management, and is used for managing users, groups, access policies & roles
|
||||
Full explanation is [here](https://aws.amazon.com/iam)
|
||||
In short: it's used for managing users, groups, access policies & roles
|
||||
</b></details>
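To make the answer a bit more concrete, a hedged boto3 sketch of the kinds of objects IAM manages; the user and group names and the managed-policy ARN below are illustrative assumptions:

```python
import boto3

iam = boto3.client("iam")

# Create a user and a group, put the user in the group,
# and grant the group read-only access to S3 via a managed policy
iam.create_user(UserName="example-user")
iam.create_group(GroupName="example-readers")
iam.add_user_to_group(GroupName="example-readers", UserName="example-user")
iam.attach_group_policy(
    GroupName="example-readers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```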
|
||||
|
||||
<details>
|
||||
@ -432,7 +432,7 @@ False. Users can belong to multiple groups.
|
||||
<summary>What are Roles?</summary><br><b>
|
||||
|
||||
A way for allowing a service of AWS to use another service of AWS. You assign roles to AWS resources.
|
||||
For example, you can make use of a role which allows EC2 service to accesses s3 buckets (read and write).
|
||||
For example, you can make use of a role which allows EC2 service to acesses s3 buckets (read and write).
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -570,7 +570,7 @@ Read more about it [here](https://aws.amazon.com/sns)
|
||||
<details>
|
||||
<summary>What is the shared responsibility model? What AWS is responsible for and what the user is responsible for based on the shared responsibility model?</summary><br><b>
|
||||
|
||||
The shared responsibility model defines what the customer is responsible for and what AWS is responsible for. For example, AWS is responsible for security "of" the cloud, while the customer is responsible for security "in" the cloud.
|
||||
The shared responsibility model defines what the customer is responsible for and what AWS is responsible for.
|
||||
|
||||
More on the shared responsibility model [here](https://aws.amazon.com/compliance/shared-responsibility-model)
|
||||
</b></details>
|
||||
@ -611,8 +611,6 @@ Learn more [here](https://aws.amazon.com/inspector)
|
||||
|
||||
<details>
|
||||
<summary>What is AWS Guarduty?</summary><br><b>
|
||||
|
||||
Guarduty is a threat detection service that monitors your AWS accounts to help detect and mitigate malicious activity
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -623,8 +621,6 @@ AWS definition: "AWS Shield is a managed Distributed Denial of Service (DDoS) pr
|
||||
|
||||
<details>
|
||||
<summary>What is AWS WAF? Give an example of how it can used and describe what resources or services you can use it with</summary><br><b>
|
||||
|
||||
An AWS Web Application Firewall (WAF) can filter out unwanted web traffic (bots), and protect against attacks like SQL injection and cross-site scripting. One service you could use it with would be Amazon CloudFront, a CDN service, to block attacks before they reach your origin servers
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -701,11 +697,6 @@ Learn more [here](https://aws.amazon.com/certificate-manager)
|
||||
|
||||
<details>
|
||||
<summary>What is AWS RDS?</summary><br><b>
|
||||
|
||||
Amazon Relational Database Service (RDS) is a service for setting up and managing resizable, cost-efficient relational databases
|
||||
resource
|
||||
|
||||
Learn more [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html)
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -739,7 +730,7 @@ Learn more [here](https://aws.amazon.com/dynamodb/dax)
|
||||
<details>
|
||||
<summary>What is AWS Redshift and how is it different than RDS?</summary><br><b>
|
||||
|
||||
AWS Redshift is a cloud data warehousing service that is geared towards handling massive amounts of data (think petabytes) and being able to execute complex queries. In contrast, Amazon RDS is best suited for things like web applications requiring simple queries with more frequent transactions, and on a smaller scale.
|
||||
cloud data warehouse
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -824,7 +815,7 @@ CloudFormation
|
||||
<details>
|
||||
<summary>Which service would you use for building a website or web application?</summary><br><b>
|
||||
|
||||
Lightsail or Elastic Beanstalk
|
||||
Lightsail
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
|
@ -55,7 +55,7 @@ False. Users can belong to multiple groups.
|
||||
<summary>What are Roles?</summary><br><b>
|
||||
|
||||
A way for allowing a service of AWS to use another service of AWS. You assign roles to AWS resources.
|
||||
For example, you can make use of a role which allows EC2 service to accesses s3 buckets (read and write).
|
||||
For example, you can make use of a role which allows EC2 service to acesses s3 buckets (read and write).
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
|
@ -22,8 +22,4 @@ Azure Firewall is a cloud-native and intelligent network firewall security servi
|
||||
</b></details>
|
||||
|
||||
|
||||
<details>
|
||||
<summary>What is Network Security Group?</summary><br><b>
|
||||
|
||||
A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.
|
||||
</b></details>
|
||||
|
Binary file not shown.
Image removed (8.7 KiB).
Image changed (9.2 KiB before and after).
@ -112,7 +112,7 @@ Be familiar with the company you are interviewing at. Some ideas:
|
||||
|
||||
From my experience, this is not done by many candidates but it's one of the best ways to deep dive into topics like operating system, virtualization, scale, distributed systems, etc.
|
||||
|
||||
In most cases, you will do fine without reading books but for the AAA interviews (hardest level) you'll want to read some books and overall if you inspire to be better DevOps Engineer, books (also articles, blog posts) is a great way develop yourself :)
|
||||
In most cases, you will do fine without reading books but for the AAA interviews (hardest level) you'll want to read some books and overall if you inspire to be better DevOps Engineer, books (also articles, blog posts) is a great way devleop yourself :)
|
||||
|
||||
### Consider starting in non-DevOps position
|
||||
|
||||
|
@ -5,85 +5,89 @@ Question utils functions
|
||||
import pathlib
|
||||
from random import choice
|
||||
from typing import List
|
||||
import re
|
||||
|
||||
p = pathlib.Path(__file__).parent.parent.joinpath("README.md")
|
||||
p = pathlib.Path(__file__).parent.parent.joinpath('README.md')
|
||||
|
||||
|
||||
def get_file_list():
|
||||
file_list = ""
|
||||
with open(p, "rb") as f:
|
||||
for line in f.readlines():
|
||||
file_list += line.rstrip().decode()
|
||||
with open(p, 'rb') as f:
|
||||
file_list = [line.rstrip() for line in f.readlines()]
|
||||
return file_list
|
||||
|
||||
|
||||
def get_question_list(file_list: List[str]) -> list:
|
||||
file_list = re.findall("<details>(.*?)</details>", file_list)
|
||||
def get_question_list(file_list: List[bytes]) -> list:
|
||||
|
||||
questions_list = []
|
||||
for i in file_list:
|
||||
q = re.findall(r"<summary>(.*?)</summary>", i)[0]
|
||||
questions_list.append(q)
|
||||
temp = []
|
||||
after_summary_tag = False
|
||||
|
||||
for line in file_list:
|
||||
if line.startswith(b'<details>'):
|
||||
temp.append(line)
|
||||
after_summary_tag = True
|
||||
|
||||
elif after_summary_tag and line != b'' and b'</details>' not in line:
|
||||
temp.append(line)
|
||||
|
||||
elif after_summary_tag and b'</details>' in line:
|
||||
temp.append(line)
|
||||
after_summary_tag = False
|
||||
|
||||
questions_list.append(temp)
|
||||
temp = []
|
||||
|
||||
return questions_list
|
||||
|
||||
|
||||
def get_answered_questions(question_list: List[str]) -> list:
|
||||
def get_answered_questions(question_list: List[List[bytes]]) -> list:
|
||||
"""Dont let the type hint confuse you, problem of not using classes.
|
||||
|
||||
It takes the result of get_question_list(file_list)
|
||||
|
||||
Returns a list of questions that are answered.
|
||||
"""
|
||||
|
||||
t = []
|
||||
question_list = re.findall("<details>(.*?)</details>", question_list)
|
||||
for i in question_list:
|
||||
q = re.findall(r"<summary>(.*?)</summary>", i)
|
||||
if q and q[0] == "":
|
||||
continue
|
||||
a = re.findall(r"<b>(.*?)</b>", i)
|
||||
if a and a[0] == "":
|
||||
continue
|
||||
else:
|
||||
t.append(q[0])
|
||||
|
||||
for q in question_list:
|
||||
|
||||
index = 0
|
||||
|
||||
for i in q:
|
||||
if b'</summary>' in i:
|
||||
index = q.index(i)
|
||||
|
||||
if q[index+1: len(q) - 1]:
|
||||
t.append(q)
|
||||
|
||||
return t
|
||||
|
||||
|
||||
def get_answers_count() -> List:
|
||||
"""
|
||||
Return [answer_questions,all_questions] ,PASS complete. FAIL incomplete.
|
||||
>>> get_answers_count()
|
||||
[463, 463]
|
||||
"""
|
||||
ans_questions = get_answered_questions(get_file_list())
|
||||
len_ans_questions = len(ans_questions)
|
||||
all_questions = get_question_list(get_file_list())
|
||||
len_all_questions = len(all_questions)
|
||||
return [len_ans_questions, len_all_questions]
|
||||
|
||||
|
||||
def get_challenges_count() -> int:
|
||||
challenges_path = (
|
||||
pathlib.Path(__file__).parent.parent.joinpath("exercises").glob("*.md")
|
||||
)
|
||||
challenges_path = pathlib.Path(__file__).parent.parent.joinpath('exercises').glob('*.md')
|
||||
return len(list(challenges_path))
|
||||
|
||||
|
||||
# WIP WAITING FEEDBACK
|
||||
def get_random_question(question_list: List[str], with_answer=False):
|
||||
def get_random_question(question_list: List[List[bytes]], with_answer=False):
|
||||
if with_answer:
|
||||
return choice(get_answered_questions(question_list))
|
||||
return choice(get_question_list(question_list))
|
||||
return choice(question_list)
|
||||
|
||||
|
||||
"""Use this question_list. Unless you have already opened/worked/need the file, then don't or
|
||||
you will end up doing the same thing twice.
|
||||
|
||||
eg:
|
||||
|
||||
#my_dir/main.py
|
||||
|
||||
from scripts import question_utils
|
||||
|
||||
print(question_utils.get_answered_questions(question_utils.question_list)
|
||||
|
||||
>> 123
|
||||
# noqa: E501
|
||||
|
||||
"""
|
||||
|
||||
if __name__ == "__main__":
|
||||
import doctest
|
||||
|
||||
doctest.testmod()
|
||||
# print(get_question_list(get_file_list()))
|
||||
# print(get_answered_questions(get_file_list()))
|
||||
# print(get_random_question(get_file_list(),True))
|
||||
# print(get_random_question(get_file_list(),False))
|
||||
question_list = get_question_list(get_file_list())
|
||||
|
@ -6,7 +6,7 @@ import os
|
||||
def main():
|
||||
"""Reads through README.md for question/answer pairs and adds them to a
|
||||
list to randomly select from and quiz yourself.
|
||||
Supports skipping questions with no documented answer with the -s flag
|
||||
Supports skipping quesitons with no documented answer with the -s flag
|
||||
"""
|
||||
parser = optparse.OptionParser()
|
||||
parser.add_option("-s", "--skip", action="store_true",
|
||||
|
@ -1,15 +1,5 @@
|
||||
#!/usr/bin/env bash
|
||||
#!/bin/bash
|
||||
# These are the same steps we are running in Travis CI
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
PROJECT_DIR="$(dirname $(readlink -f ${BASH_SOURCE[0]}))/.."
|
||||
|
||||
MD_FILES=$(find ${PROJECT_DIR} -name "*.md" -not -path "${PROJECT_DIR}/tests/*")
|
||||
|
||||
for file in ${MD_FILES[@]}; do
|
||||
python ${PROJECT_DIR}/tests/syntax_lint.py ${file} > /dev/null
|
||||
done
|
||||
|
||||
echo "- Syntax lint tests on MD files passed successfully"
|
||||
|
||||
flake8 --max-line-length=100 . && echo "- PEP8 Passed"
|
||||
python $(dirname "$0")/../tests/syntax_lint.py
|
||||
flake8 --max-line-length=100 . && echo "PEP8 Passed"
|
||||
|
@ -11,10 +11,12 @@ $ python tests/syntax_lint.py
|
||||
|
||||
"""
|
||||
|
||||
import sys
|
||||
import pathlib
|
||||
|
||||
p = sys.argv[1]
|
||||
p = pathlib.Path(__file__).parent.parent.joinpath('README.md')
|
||||
|
||||
with open(p, 'rb') as f:
|
||||
file_list = [line.rstrip() for line in f.readlines()]
|
||||
|
||||
errors = []
|
||||
|
||||
@ -29,9 +31,9 @@ def count_details(file_list):
|
||||
details_count = 0
|
||||
|
||||
for line_number, line in enumerate(file_list):
|
||||
if b"<details>" in line:
|
||||
if b'<details>' in line:
|
||||
details_count += 1
|
||||
if b"</details>" in line:
|
||||
if b'</details>' in line:
|
||||
details_final_count += 1
|
||||
|
||||
return details_count == details_final_count
|
||||
@ -47,9 +49,9 @@ def count_summary(file_list):
|
||||
details_count = 0
|
||||
|
||||
for line_number, line in enumerate(file_list):
|
||||
if b"<summary>" in line:
|
||||
if b'<summary>' in line:
|
||||
details_count += 1
|
||||
if b"</summary>" in line:
|
||||
if b'</summary>' in line:
|
||||
details_final_count += 1
|
||||
|
||||
return details_count == details_final_count
|
||||
@ -68,22 +70,22 @@ def check_details_tag(file_list):
|
||||
|
||||
after_detail = False
|
||||
error = False
|
||||
err_message = ""
|
||||
err_message = ''
|
||||
for line_number, line in enumerate(file_list):
|
||||
if b"<details>" in line and b"</details>" in line:
|
||||
if b'<details>' in line and b'</details>' in line:
|
||||
pass
|
||||
else:
|
||||
if b"<details>" in line and after_detail:
|
||||
err_message = f"Missing closing detail tag round line {line_number - 1}"
|
||||
if b'<details>' in line and after_detail:
|
||||
err_message = f'Missing closing detail tag round line {line_number - 1}'
|
||||
error = True
|
||||
if b"</details>" in line and not after_detail:
|
||||
err_message = f"Missing opening detail tag round line {line_number - 1}"
|
||||
if b'</details>' in line and not after_detail:
|
||||
err_message = f'Missing opening detail tag round line {line_number - 1}'
|
||||
error = True
|
||||
|
||||
if b"<details>" in line:
|
||||
if b'<details>' in line:
|
||||
after_detail = True
|
||||
|
||||
if b"</details>" in line and after_detail:
|
||||
if b'</details>' in line and after_detail:
|
||||
after_detail = False
|
||||
|
||||
if error:
|
||||
@ -105,26 +107,22 @@ def check_summary_tag(file_list):
|
||||
|
||||
after_summary = False
|
||||
error = False
|
||||
err_message = ""
|
||||
for idx, line in enumerate(file_list):
|
||||
line_number = idx + 1
|
||||
if b"<summary>" in line and b"</summary>" in line:
|
||||
if after_summary:
|
||||
err_message = f"Missing closing summary tag around line {line_number}"
|
||||
error = True
|
||||
|
||||
err_message = ''
|
||||
for line_number, line in enumerate(file_list):
|
||||
if b'<summary>' in line and b'</summary>' in line:
|
||||
pass
|
||||
else:
|
||||
if b"<summary>" in line and after_summary:
|
||||
err_message = f"Missing closing summary tag around line {line_number}"
|
||||
if b'<summary>' in line and after_summary:
|
||||
err_message = f'Missing closing summary tag around line {line_number}'
|
||||
error = True
|
||||
if b"</summary>" in line and not after_summary:
|
||||
err_message = f"Missing opening summary tag around line {line_number}"
|
||||
if b'</summary>' in line and not after_summary:
|
||||
err_message = f'Missing opening summary tag around line {line_number}'
|
||||
error = True
|
||||
|
||||
if b"<summary>" in line:
|
||||
if b'<summary>' in line:
|
||||
after_summary = True
|
||||
|
||||
if b"</summary>" in line and after_summary:
|
||||
if b'</summary>' in line and after_summary:
|
||||
after_summary = False
|
||||
|
||||
if error:
|
||||
@ -133,20 +131,12 @@ def check_summary_tag(file_list):
|
||||
error = False
|
||||
|
||||
|
||||
def check_md_file(file_name):
|
||||
with open(p, "rb") as f:
|
||||
file_list = [line.rstrip() for line in f.readlines()]
|
||||
if __name__ == '__main__':
|
||||
check_details_tag(file_list)
|
||||
check_summary_tag(file_list)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
print(f"..........Checking {p}..........")
|
||||
check_md_file(p)
|
||||
if errors:
|
||||
print(f"{p} failed", file=sys.stderr)
|
||||
for error in errors:
|
||||
print(error, file=sys.stderr)
|
||||
print(error)
|
||||
exit(1)
|
||||
|
||||
print("Tests passed successfully.")
|
||||
|
@ -352,7 +352,7 @@ A full list can be found at [PlayBook Variables](https://docs.ansible.com/ansib
|
||||
* Host facts override play variables
|
||||
* A role might include the following: vars, meta, and handlers
|
||||
* Dynamic inventory is generated by extracting information from external sources
|
||||
* It’s a best practice to use indentation of 2 spaces instead of 4
|
||||
* It’s a best practice to use indention of 2 spaces instead of 4
|
||||
* ‘notify’ used to trigger handlers
|
||||
* This “hosts: all:!controllers” means ‘run only on controllers group hosts</summary><br><b>
|
||||
</b></details>
|
||||
@ -509,9 +509,6 @@ If your group has 8 hosts. It will run the whole play on 4 hosts and then the sa
|
||||
|
||||
<details>
|
||||
<summary>What is Molecule? How does it work?</summary><br><b>

It's used to rapidly develop and test Ansible roles. Molecule can be used to test Ansible roles against a variety of Linux distros at the same time. This testing ability helps instill confidence in the automation today and over time, as a role is maintained.
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
|
@ -134,7 +134,7 @@ The answer is yes, it's possible. You can configure ArgoCD to sync to desired st
|
||||
<details>
|
||||
<summary>How cluster disaster recovery becomes easier with ArgoCD?</summary><br><b>
|
||||
|
||||
Imagine you have a cluster in the cloud, in one of the regions. Something happens to that cluster and it's either crashes or simply no longer operational.
|
||||
Imagine you have a cluster in the cloud, in one of the regions. Something happens to that cluster and it's either crashes or simply no longer opertional.
|
||||
|
||||
If you have all your cluster configuration in a GitOps repository, ArgoCD can be pointed at that repository while being configured to use the new cluster you've set up, and it will apply that configuration so your cluster is up and running again with the same state as before.
|
||||
</b></details>
|
||||
@ -335,7 +335,7 @@ There are multiple ways to deal with it:
|
||||
<summary>What are some possible health statuses for an ArgoCD application?</summary><br><b>
|
||||
|
||||
* Healthy
|
||||
* Missing: resource doesn't exist in the cluster
|
||||
* Missing: resource doesn't exist in the cluser
|
||||
* Suspended: resource is paused
|
||||
* Progressing: resources isn't healthy but will become healthy or has the chance to become healthy
|
||||
* Degraded: resource isn't healthy
|
||||
|
@ -7,7 +7,7 @@
|
||||
|
||||
## Objectives
|
||||
|
||||
1. Using the CLI or the UI, create a new application with the following properties:
|
||||
1. Using the CLI or the UI, create a a new application with the following properties:
|
||||
1. app name: app-demo
|
||||
2. project: app-project
|
||||
3. repository URL: your repo with some k8s manifests
|
||||
|
@ -7,7 +7,7 @@
|
||||
|
||||
## Objectives
|
||||
|
||||
1. Using the CLI or the UI, create a new application with the following properties:
|
||||
1. Using the CLI or the UI, create a a new application with the following properties:
|
||||
1. app name: app-demo
|
||||
2. project: app-project
|
||||
3. repository URL: your repo with some k8s manifests
|
||||
|
@ -14,4 +14,4 @@
|
||||
|
||||
## Solution
|
||||
|
||||
Click [here](solution.md) to view the solution
|
||||
Click [here](soltuion.md) to view the solution
|
@ -497,7 +497,7 @@ EBS
|
||||
<details>
|
||||
<summary>What happens to EBS volumes when the instance is terminated?</summary><br><b>
|
||||
|
||||
By default, the root volume is marked for deletion, while other volumes will still remain.<br>
|
||||
By deafult, the root volume is marked for deletion, while other volumes will still remain.<br>
|
||||
You can control what will happen to every volume upon termination.
|
||||
</b></details>
|
||||
|
||||
@ -1112,8 +1112,6 @@ Use Elastic IP which provides you a fixed IP address.
|
||||
|
||||
<details>
|
||||
<summary>When creating a new VPC, there is an option called "Tenancy". What is it used for?</summary><br><b>
|
||||
|
||||
[AWS Docs](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html): `Tenancy` option defines if EC2 instances that you launch into the VPC will run on hardware that's shared with other AWS accounts or on hardware that's dedicated for your use only.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -1258,7 +1256,7 @@ This not only provides enhanced security but also easier access for the user whe
|
||||
|
||||
- Uploading images to S3 and tagging them or inserting information on the images to a database
|
||||
- Uploading videos to S3 and edit them or add subtitles/captions to them and store the result in S3
|
||||
- Use SNS and/or SQS to trigger functions based on notifications or messages received from these services.
|
||||
- Use SNS and/or SQS to trigger functions based on notifications or messages receieved from these services.
|
||||
- Cron Jobs: Use Lambda together with CloudWatch events to schedule tasks/functions periodically.
|
||||
</b></details>
|
||||
|
||||
@ -1853,7 +1851,7 @@ False. It's disabled by default
|
||||
<details>
|
||||
<summary>True or False? In regards to cross zone load balancing, AWS charges you for inter AZ data in network load balancer but no in application load balancer</summary><br><b>
|
||||
|
||||
True. It charges for inter AZ data in network load balancer, but not in application load balancer
|
||||
False. It charges for inter AZ data in network load balancer, but not in application load balancer
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -2594,7 +2592,7 @@ AWS Cognito
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Which service is often referred to as "used for decoupling applications"?</summary><br><b>
|
||||
<summary>Which service is often reffered to as "used for decoupling applications"?</summary><br><b>
|
||||
|
||||
AWS SQS. Since it's a messaging queue so it allows applications to switch from synchronous communication to asynchronous one.
|
||||
</b></details>
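A hedged boto3 sketch of the decoupling idea (the queue name and message body are made up):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders-example")["QueueUrl"]

# Producer: sends work to the queue and moves on, without waiting for the consumer
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer: polls the queue whenever it is ready to process work
response = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=5)
for msg in response.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```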
|
||||
|
@ -23,37 +23,3 @@ As you probably know at this point, it's not recommended to work with the root a
|
||||
10. Click on "Next: Tags"
|
||||
11. Add a tag with the key `Role` and the value `DevOps`
|
||||
12. Click on "Review" and then create on "Create user"
|
||||
|
||||
13. ### Solution using Terraform
|
||||
|
||||
```
|
||||
|
||||
resource "aws_iam_group_membership" "team" {
|
||||
name = "tf-testing-group-membership"
|
||||
|
||||
users = [
|
||||
aws_iam_user.newuser.name,
|
||||
|
||||
]
|
||||
|
||||
group = aws_iam_group.admin.name
|
||||
}
|
||||
|
||||
resource "aws_iam_group_policy_attachment" "test-attach" {
|
||||
group = aws_iam_group.admin.name
|
||||
policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
|
||||
}
|
||||
resource "aws_iam_group" "admin" {
|
||||
name = "admin"
|
||||
}
|
||||
|
||||
resource "aws_iam_user" "newuser" {
|
||||
name = "newuser"
|
||||
path = "/system/"
|
||||
|
||||
tags = {
|
||||
Role = "DevOps"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -30,17 +30,3 @@ MFA:
|
||||
3. Expand "Multi-factor authentication (MFA)" and click on "Activate MFA"
|
||||
4. Choose one of the devices
|
||||
5. Follow the instructions to set it up and click on "Assign MFA"
|
||||
|
||||
6. ### Solution using Terraform:
|
||||
|
||||
```
|
||||
resource "aws_iam_account_password_policy" "strict" {
|
||||
minimum_password_length = 8
|
||||
require_numbers = true
|
||||
allow_users_to_change_password = true
|
||||
password_reuse_prevention = 1
|
||||
}
|
||||
```
|
||||
|
||||
**Note:** You cannot add MFA through terraform, you have to do it in the GUI.
|
||||
|
||||
|
@ -17,7 +17,7 @@ aws.s3.BucketObject("bucketObject",
|
||||
|
||||
# Public Bucket
|
||||
aws.s3.Bucket("my-first-public-bucket",
|
||||
acl="public-read",
|
||||
acl="private",
|
||||
tags={
|
||||
"Environment": "Exercise",
|
||||
"Name": "My First Public Bucket"},
|
||||
|
@ -9,7 +9,7 @@ Initialize a CDK project and set up files required to build a CDK project.
|
||||
#### Initialize a CDK project
|
||||
|
||||
1. Install CDK on your machine by running `npm install -g aws-cdk`.
|
||||
2. Create a new directory named `sample` for your project and run `cdk init app --language typescript` to initialize a CDK project. You can choose language as csharp, fsharp, go, java, javascript, python or typescript.
|
||||
2. Create a new directory named `sample` for your project and run `cdk init app --language typescript` to initialize a CDK project. You can choose lanugage as csharp, fsharp, go, java, javascript, python or typescript.
|
||||
3. You would see the following files created in your directory:
|
||||
1. `cdk.json`, `tsconfig.json`, `package.json` - These are configuration files that are used to define some global settings for your CDK project.
|
||||
2. `bin/sample.ts` - This is the entry point for your CDK project. This file is used to define the stack that you want to create.
|
||||
|
@ -33,9 +33,10 @@ An availability set is a logical grouping of VMs that allows Azure to understand
|
||||
|
||||
<details>
|
||||
<summary>What is Azure Policy?</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
|
||||
[Microsoft Learn](https://learn.microsoft.com/en-us/azure/governance/policy/overview): "Azure Policy helps to enforce organizational standards and to assess compliance at-scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources."
|
||||
<details>
|
||||
<summary>What is the Azure Resource Manager? Can you describe the format for ARM templates?</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -51,24 +52,6 @@ From [Azure docs](https://docs.microsoft.com/en-us/azure/azure-resource-manager/
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
|
||||
<summary>What are the ARM template's sections ?</summary><br><b>
|
||||
|
||||
[Microsoft Learn](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/overview): The template has the following sections:
|
||||
|
||||
Parameters - Provide values during deployment that allow the same template to be used with different environments.
|
||||
|
||||
Variables - Define values that are reused in your templates. They can be constructed from parameter values.
|
||||
|
||||
User-defined functions - Create customized functions that simplify your template.
|
||||
|
||||
Resources - Specify the resources to deploy.
|
||||
|
||||
Outputs - Return values from the deployed resources.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
|
||||
<summary>What's an Azure Resource Group?</summary><br><b>
|
||||
|
||||
From [Azure docs](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal): "A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group."
|
||||
@ -83,20 +66,19 @@ From [Azure docs](https://docs.microsoft.com/en-us/azure/azure-resource-manager/
|
||||
* Azure Batch
|
||||
* Azure Service Fabric
|
||||
* Azure Container Instances
|
||||
* Azure Virtual Machine Scale Sets
|
||||
* Azure Virtual Machine Scale Set?s
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
|
||||
<summary>What "Azure Virtual Machines" service is used for?</summary><br><b>
|
||||
|
||||
Azure VMs support Windows and Linux OS. They can be used for hosting web servers, applications, backups, Databases, they can also be used as jump server or azure self-hosted agent for building and deploying apps.
|
||||
Windows or Linux virtual machines
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What "Azure Virtual Machine Scale Sets" service is used for?</summary><br><b>
|
||||
|
||||
Scaling Linux or Windows virtual machines; it lets you create and manage a group of load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule.
|
||||
Scaling Linux or Windows virtual machines used in Azure
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -136,24 +118,14 @@ Running parallel and high-performance computing applications
|
||||
|
||||
<details>
|
||||
<summary>What Azure network services are you familiar with?</summary><br><b>
|
||||
</b></details>
|
||||
<details>
|
||||
<summary>Explain VNet peering</summary><br><b>
|
||||
|
||||
VNet peering enables connecting virtual networks. This means that you can route traffic between resources of the connected VNets privately through IPv4 addresses. Connecting VNets within the same region is known as regional VNet Peering, however connecting VNets across Azure regions is known as global VNet Peering.
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What's an Azure region?</summary><br><b>
|
||||
|
||||
An Azure region is a set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is the N-tier architecture?</summary><br><b>
|
||||
|
||||
N-tier architecture divides an application into logical layers and physical tiers. Each layer has a specific responsibility. Tiers are physically separated, running on separate machines. An N-tier application can have a closed layer architecture or an open layer architecture. N-tier architectures are typically implemented as infrastructure-as-service (IaaS) applications, with each tier running on a separate set of VMs
|
||||
</b></details>
|
||||
|
||||
### Storage
|
||||
@ -207,7 +179,10 @@ Azure AD is a cloud-based identity service. You can use it as a standalone servi
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain VNet peering</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Which protocols are available for configuring health probe</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
|
@ -29,17 +29,3 @@ According to [Gremlin](gremlin.com) there are three steps:
|
||||
The process then repeats itself either with same scenario or a new one.
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Cite a few tools used to operate Chaos exercises</summary><br><b>
|
||||
|
||||
- AWS Fault Injection Simulator: inject failures in AWS resources
|
||||
- Azure Chaos Studio: inject failures in Azure resources
|
||||
- Chaos Monkey: one of the most famous tools to orchestrate Chaos on diverse Cloud providers
|
||||
- Litmus - A Framework for Kubernetes
|
||||
- Chaos Mesh: for Cloud Kubernetes platforms
|
||||
|
||||
|
||||
See an extensive list [here](https://github.com/dastergon/awesome-chaos-engineering)
|
||||
|
||||
</b></details>
|
@ -7,7 +7,7 @@
|
||||
| Set up a CI pipeline | CI | [Exercise](ci_for_open_source_project.md) | | |
|
||||
| Deploy to Kubernetes | Deployment | [Exercise](deploy_to_kubernetes.md) | [Solution](solutions/deploy_to_kubernetes/README.md) | |
|
||||
| Jenkins - Remove Jobs | Jenkins Scripts | [Exercise](remove_jobs.md) | [Solution](solutions/remove_jobs_solution.groovy) | |
|
||||
| Jenkins - Remove Builds | Jenkins Scripts | [Exercise](remove_builds.md) | [Solution](solutions/remove_builds_solution.groovy) | |
|
||||
| Jenkins - Remove Builds | Jenkins Sripts | [Exercise](remove_builds.md) | [Solution](solutions/remove_builds_solution.groovy) | |
|
||||
|
||||
### CI/CD Self Assessment
|
||||
|
||||
@ -76,18 +76,6 @@ The difference between the two is that Continuous Delivery isn't fully automated
|
||||
|
||||
<details>
|
||||
<summary>You are given a pipeline and a pool with 3 workers: virtual machine, baremetal and a container. How will you decide on which one of them to run the pipeline?</summary><br><b>
|
||||
|
||||
The decision on which type of worker (virtual machine, bare-metal, or container) to use for running a pipeline would depend on several factors, including the nature of the pipeline, the requirements of the software being built, the available resources, and the specific goals and constraints of the development and deployment process. Here are some considerations that can help in making the decision:
|
||||
|
||||
1. Pipeline requirements
|
||||
2. Resource availability
|
||||
3. Scalability and flexibility
|
||||
4. Deployment and isolation requirements
|
||||
5. Security considerations
|
||||
6. Development and operational workflows
|
||||
7. Cost considerations
|
||||
|
||||
Based on these considerations, the appropriate choice of worker (virtual machine, bare-metal, or container) for running the pipeline would be determined by weighing the pros and cons of each option and aligning with the specific requirements, resources, and goals of the development and deployment process. It may also be useful to consult with relevant stakeholders, such as developers, operations, and infrastructure teams, to gather input and make an informed decision.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -103,54 +91,14 @@ There are multiple approaches as to where to store the CI/CD pipeline definition
|
||||
|
||||
<details>
|
||||
<summary>How do you perform plan capacity for your CI/CD resources? (e.g. servers, storage, etc.)</summary><br><b>
|
||||
|
||||
Capacity planning for CI/CD resources involves estimating the resources required to support the CI/CD pipeline and ensuring that the infrastructure has enough capacity to meet the demands of the pipeline. Here are some steps to perform capacity planning for CI/CD resources:
|
||||
|
||||
1. Analyze workload
|
||||
2. Monitor current usage
|
||||
3. Identify resource bottlenecks
|
||||
4. Forecast future demand
|
||||
5. Plan for growth
|
||||
6. Consider scalability and elasticity
|
||||
7. Evaluate cost and budget
|
||||
8. Continuously monitor and adjust
|
||||
|
||||
By following these steps, you can effectively plan the capacity for your CI/CD resources, ensuring that your pipeline has sufficient resources to operate efficiently and meet the demands of your development process.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>How would you structure/implement CD for an application which depends on several other applications?</summary><br><b>
|
||||
|
||||
Implementing Continuous Deployment (CD) for an application that depends on several other applications requires careful planning and coordination to ensure smooth and efficient deployment of changes across the entire ecosystem. Here are some general steps to structure/implement CD for an application with dependencies:
|
||||
|
||||
1. Define the deployment pipeline
|
||||
2. Automate the deployment process
|
||||
3. Version control and dependency management
|
||||
4. Continuous integration and testing
|
||||
5. Rolling deployments
|
||||
6. Monitor and manage dependencies
|
||||
7. Testing across the ecosystem
|
||||
8. Rollback and recovery strategies
|
||||
9. Security and compliance
|
||||
10. Documentation and communication
|
||||
|
||||
Implementing CD for an application with dependencies requires careful planning, coordination, and automation to ensure efficient and reliable deployments. By following best practices such as automation, version control, testing, monitoring, rollback strategies, and effective communication, you can ensure a smooth and successful CD process for your application ecosystem.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>How do you measure your CI/CD quality? Are there any metrics or KPIs you are using for measuring the quality?</summary><br><b>
|
||||
|
||||
Measuring the quality of CI/CD processes is crucial to identify areas for improvement, ensure efficient and reliable software delivery, and achieve continuous improvement. Here are some commonly used metrics and KPIs (Key Performance Indicators) to measure CI/CD quality:
|
||||
|
||||
1. Build Success Rate: This metric measures the percentage of successful builds compared to the total number of builds. A high build success rate indicates that the majority of builds are successful and the CI/CD pipeline is stable.
|
||||
2. Build and Deployment Time: This metric measures the time it takes to build and deploy changes from code commit to production. Faster build and deployment times indicate shorter feedback loops and faster time to market.
|
||||
3. Deployment Frequency: This metric measures the frequency of deployments to production within a given time period. Higher deployment frequency indicates faster release cycles and more frequent updates to production.
|
||||
4. Mean Time to Detect (MTTD): This metric measures the average time it takes to detect issues or defects in the CI/CD pipeline or production environment. Lower MTTD indicates faster detection and resolution of issues, leading to higher quality and more reliable deployments.
|
||||
5. Mean Time to Recover (MTTR): This metric measures the average time it takes to recover from issues or incidents in the CI/CD pipeline or production environment. Lower MTTR indicates faster recovery and reduced downtime, leading to higher availability and reliability.
|
||||
6. Feedback Loop Time: This metric measures the time it takes to receive feedback on code changes, including code reviews, test results, and other feedback mechanisms. Faster feedback loop times enable quicker iterations and faster improvements in the CI/CD process.
|
||||
7. Customer Satisfaction: This metric measures the satisfaction of end-users or customers with the quality and reliability of the deployed software. Higher customer satisfaction indicates that the CI/CD process is delivering high-quality software that meets customer expectations.
|
||||
|
||||
These are just some examples of metrics and KPIs that can be used to measure the quality of CI/CD processes. It's important to choose metrics that align with the goals and objectives of your organization and regularly track and analyze them to continuously improve the CI/CD process and ensure high-quality software delivery.
|
||||
</b></details>
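As a toy illustration of the first and fifth metrics (build success rate and MTTR), a small Python sketch with made-up sample data:

```python
from datetime import timedelta

# Hypothetical build results pulled from a CI server
builds = ["success", "success", "failure", "success", "failure", "success"]
success_rate = builds.count("success") / len(builds) * 100
print(f"Build success rate: {success_rate:.1f}%")  # 66.7%

# Hypothetical incident durations (detection -> recovery)
incidents = [timedelta(minutes=12), timedelta(minutes=30), timedelta(minutes=18)]
mttr = sum(incidents, timedelta()) / len(incidents)
print(f"MTTR: {mttr}")  # 0:20:00
```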
|
||||
|
||||
#### CI/CD - Jenkins
|
||||
@ -171,20 +119,6 @@ Jenkins integrates development life-cycle processes of all kinds, including buil
|
||||
* Bamboo
|
||||
* Teamcity
|
||||
* CircleCI</summary><br><b>
|
||||
|
||||
Jenkins has several advantages over its competitors, including Travis, Bamboo, TeamCity, and CircleCI. Here are some of the key advantages:
|
||||
|
||||
1. Open-source and free
|
||||
2. Customizable and flexible
|
||||
3. Wide range of integrations and Plugins
|
||||
4. Active and supportive community
|
||||
|
||||
When comparing Jenkins to its competitors, there are some key differences in terms of features and capabilities. For example:
|
||||
|
||||
- Travis: Travis is a cloud-based CI/CD platform that is known for its ease of use and fast setup. However, it has fewer customization options and integrations compared to Jenkins.
|
||||
- Bamboo: Bamboo is a CI/CD tool from Atlassian, the makers of JIRA and Confluence. It provides a range of features for building, testing, and deploying software, but it can be more expensive and complex to set up compared to Jenkins.
|
||||
- TeamCity: TeamCity is a CI/CD tool from JetBrains, the makers of IntelliJ IDEA. It provides a range of features for building, testing, and deploying software, but it can be more complex and resource-intensive compared to Jenkins.
|
||||
- CircleCI: CircleCI is a cloud-based CI/CD platform that is known for its fast build times and easy integration with GitHub. However, it can be more expensive compared to Jenkins, especially for larger projects.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -214,52 +148,14 @@ This might be considered to be an opinionated answer:
|
||||
|
||||
<details>
|
||||
<summary>What plugins have you used in Jenkins?</summary><br><b>
|
||||
|
||||
Jenkins has a vast library of plugins, and the most commonly used plugins depend on the specific needs and requirements of each organization. However, here are some of the most popular and widely used plugins in Jenkins:
|
||||
|
||||
Pipeline: This plugin allows users to create and manage complex, multi-stage pipelines using a simple and easy-to-use scripting language. It provides a powerful and flexible way to automate the entire software delivery process, from code commit to deployment.
|
||||
|
||||
Git: This plugin provides integration with Git, one of the most popular version control systems used today. It allows users to pull code from Git repositories, trigger builds based on code changes, and push code changes back to Git.
|
||||
|
||||
Docker: This plugin provides integration with Docker, a popular platform for building, shipping, and running distributed applications. It allows users to build and run Docker containers as part of their build process, enabling easy and repeatable deployment of applications.
|
||||
|
||||
JUnit: This plugin provides integration with JUnit, a popular unit testing framework for Java applications. It allows users to run JUnit tests as part of their build process and generates reports and statistics on test results.
|
||||
|
||||
Cobertura: This plugin provides code coverage reporting for Java applications. It allows users to measure the code coverage of their tests and generate reports on which parts of the code are covered by tests.
|
||||
|
||||
Email Extension: This plugin provides advanced email notification capabilities for Jenkins. It allows users to customize the content and format of email notifications, including attachments, and send notifications to specific users or groups based on build results.
|
||||
|
||||
Artifactory: This plugin provides integration with Artifactory, a popular artifact repository for storing and managing binaries and dependencies. It allows users to publish and retrieve artifacts from Artifactory as part of their build process.
|
||||
|
||||
SonarQube: This plugin provides integration with SonarQube, a popular code quality analysis tool. It allows users to run code quality checks and generate reports on code quality metrics such as code complexity, code duplication, and code coverage.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Have you used Jenkins for CI or CD processes? Can you describe them?</summary><br><b>
|
||||
|
||||
Let's assume we have a web application built using Node.js, and we want to automate its build and deployment process using Jenkins. Here is how we can set up a simple CI/CD pipeline using Jenkins:
|
||||
|
||||
1. Install Jenkins: We can install Jenkins on a dedicated server or on a cloud platform such as AWS or Google Cloud.
|
||||
2. Install necessary plugins: Depending on the specific requirements of the project, we may need to install plugins such as NodeJS, Git, Docker, and any other plugins required by the project.
|
||||
3. Create a new job: In Jenkins, a job is a defined set of instructions for automating a particular task. We can create a new job and configure it to build our Node.js application.
|
||||
4. Configure the job: We can configure the job to pull the latest code from the Git repository, install any necessary dependencies using Node.js, run unit tests, and build the application using a build script.
|
||||
5. Set up a deployment environment: We can set up a separate environment for deploying the application, such as a staging or production environment. We can use Docker to create a container image of the application and deploy it to the environment.
|
||||
6. Set up continuous deployment: We can configure the job to automatically deploy the application to the deployment environment if the build and tests pass.
|
||||
7. Monitor and troubleshoot: We can monitor the pipeline for errors or failures and troubleshoot any issues that arise.
|
||||
|
||||
This is just a simple example of a CI/CD pipeline using Jenkins, and the specific implementation details may vary depending on the requirements of the project.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What type of jobs are there? Which types have you used?</summary><br><b>
|
||||
|
||||
In Jenkins, there are various types of jobs, including:
|
||||
|
||||
1. Freestyle job: This is the most common type of job in Jenkins, which allows users to define custom build steps and configure various options, including build triggers, SCM polling, and post-build actions.
|
||||
2. Pipeline job: Pipeline job is a newer feature in Jenkins that allows users to define a pipeline of jobs that can be executed in a specific order. The pipeline can be defined using a Jenkinsfile, which provides a script-like syntax for defining the pipeline stages, steps, and conditions.
|
||||
3. Multi-configuration job: This type of job allows users to execute the same job with multiple configurations, such as different operating systems, browsers, or devices. Jenkins will execute the job for each configuration specified, providing a matrix of results.
|
||||
4. Maven job: This type of job is specifically designed for building Java applications using the Maven build tool. Jenkins will execute the Maven build process, including compiling, testing, and packaging the application.
|
||||
5. Parameterized job: This type of job allows users to define parameters that can be passed into the build process at runtime. Parameters can be used to customize the build process, such as specifying the version number or target environment.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -299,92 +195,18 @@ You can describe the UI way to add new nodes but better to explain how to do in
|
||||
|
||||
<details>
|
||||
<summary>How to acquire multiple nodes for one specific build?</summary><br><b>
|
||||
|
||||
To acquire multiple nodes for a specific build in Jenkins, you can use the "Parallel" feature in the pipeline script. The "Parallel" feature allows you to run multiple stages in parallel, and each stage can run on a different node.
|
||||
|
||||
Here is an example pipeline script that demonstrates how to acquire multiple nodes for a specific build:
|
||||
|
||||
```tsx
|
||||
pipeline {
|
||||
agent any
|
||||
stages {
|
||||
stage('Build') {
|
||||
parallel {
|
||||
stage('Node 1') {
|
||||
agent { label 'node1' }
|
||||
steps {
|
||||
// Run build commands on Node 1
|
||||
}
|
||||
}
|
||||
stage('Node 2') {
|
||||
agent { label 'node2' }
|
||||
steps {
|
||||
// Run build commands on Node 2
|
||||
}
|
||||
}
|
||||
stage('Node 3') {
|
||||
agent { label 'node3' }
|
||||
steps {
|
||||
// Run build commands on Node 3
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
stage('Deploy') {
|
||||
agent any
|
||||
steps {
|
||||
// Deploy the built artifacts
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
In this example, the "Build" stage has three parallel stages, each running on a different node labeled as "node1", "node2", and "node3". The "Deploy" stage runs after the build is complete and runs on any available node.
|
||||
|
||||
To use this pipeline script, you will need to have the three nodes (node1, node2, and node3) configured in Jenkins. You will also need to ensure that the necessary build commands and dependencies are installed on each node.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Whenever a build fails, you would like to notify the team owning the job regarding the failure and provide failure reason. How would you do that?</summary><br><b>
|
||||
|
||||
In Jenkins, you can use the "Email Notification" plugin to notify a team when a build fails. Here are the steps to set up email notifications for failed builds:
|
||||
|
||||
1. Install the "Email Notification" plugin if it's not already installed in Jenkins.
|
||||
2. Go to the Jenkins job configuration page and click on "Configure".
|
||||
3. Scroll down to the "Post-build Actions" section and click on "Add post-build action".
|
||||
4. Select "Editable Email Notification" from the list of options.
|
||||
5. Fill out the required fields, such as the recipient email addresses, subject line, and email content. You can use Jenkins environment variables, such as ${BUILD_URL} and ${BUILD_LOG}, to include build-specific information in the email content.
|
||||
6. In the "Advanced Settings" section, select the "Send to recipients" option and choose "Only on failure" from the dropdown menu.
|
||||
7. Click "Save" to save the job configuration.
|
||||
|
||||
With this setup, Jenkins will send an email notification to the specified recipients whenever a build fails, providing them with the failure reason and any other relevant information.
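
As a pipeline-based sketch of the same idea, assuming the Email Extension plugin is installed (the recipient address and build step below are placeholders), a declarative Jenkinsfile can send the notification from a `post { failure { ... } }` block:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'   // placeholder build step
            }
        }
    }
    post {
        failure {
            // requires the Email Extension plugin
            emailext(
                to: 'team-x@example.com',
                subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                body: "The build failed. Details and logs: ${env.BUILD_URL}"
            )
        }
    }
}
```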
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>There are four teams in your organization. How to prioritize the builds of each team? So the jobs of team x will always run before team y for example</summary><br><b>
|
||||
|
||||
In Jenkins, you can prioritize the builds of each team by using the "Priority Sorter" plugin. Here are the steps to set up build prioritization:
|
||||
|
||||
1. Install the "Priority Sorter" plugin if it's not already installed in Jenkins.
|
||||
2. Go to the Jenkins system configuration page and click on "Configure Global Security". Scroll down to the "Access Control" section and click on "Per-project basis".
|
||||
3. In the "Project default actions" section, select "Configure build triggers and execution" from the dropdown menu. Click on "Add user or group" and add the groups that represent each team in your organization.
|
||||
4. Go to each Jenkins job configuration page and click on "Configure". Scroll down to the "Build Environment" section and click on "Add build step". Select "Set build priority with Priority Sorter" from the list of options.
|
||||
5. Set the priority of the job based on the team that owns it. For example, if Team X owns the job, set the priority to a higher value than the jobs owned by Team Y. Click "Save" to save the job configuration.
|
||||
|
||||
With this setup, Jenkins will prioritize the builds of each team based on the priority value set in the job configuration. Jobs owned by Team X will have a higher priority than jobs owned by Team Y, ensuring that they are executed first.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>If you are managing a dozen of jobs, you can probably use the Jenkins UI. But how do you manage the creation and deletion of hundreds of jobs every week/month?</summary><br><b>
|
||||
|
||||
Managing the creation and deletion of hundreds of jobs every week/month in Jenkins can be a daunting task if done manually through the UI. Here are some approaches to manage large numbers of jobs efficiently:
|
||||
|
||||
1. Use job templates
|
||||
2. Use Job DSL
|
||||
3. Use Jenkins REST API
|
||||
4. Use a configuration management tool
|
||||
5. Use a Jenkins job management tool
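
For example, with the Job DSL plugin a single seed script can generate (and re-generate) many similar jobs. A minimal sketch, with placeholder service names and repository URLs:

```groovy
// Job DSL seed script: creates one build job per service
['service-a', 'service-b', 'service-c'].each { name ->
    job("build-${name}") {
        scm {
            git("https://example.com/org/${name}.git")
        }
        triggers {
            scm('H/15 * * * *')   // poll SCM roughly every 15 minutes
        }
        steps {
            shell('make test')
        }
    }
}
```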
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -396,111 +218,14 @@ Managing the creation and deletion of hundreds of jobs every week/month in Jenki
|
||||
|
||||
<details>
|
||||
<summary>What is the different between a scripted pipeline to declarative pipeline? Which type are you using?</summary><br><b>
|
||||
|
||||
Jenkins supports two types of pipelines: Scripted pipelines and Declarative pipelines.
|
||||
|
||||
Scripted pipelines use Groovy syntax and provide a high degree of flexibility and control over the build process. Scripted pipelines allow developers to write custom code to handle complex scenarios, but can be complex and hard to maintain.
|
||||
|
||||
Declarative pipelines are a newer feature and provide a simpler way to define pipelines using a structured, predefined syntax in a Jenkinsfile. Declarative pipelines provide a more structured and opinionated way to define builds, making it easier to get started with pipelines and reducing the risk of errors.
|
||||
|
||||
Some key differences between the two types of pipelines are:
|
||||
|
||||
1. Syntax: Scripted pipelines are written in plain Groovy, while declarative pipelines use a predefined, more restrictive structure (also written in a Jenkinsfile).
|
||||
2. Structure: Declarative pipelines have a more structured format and define specific stages, while scripted pipelines provide more flexibility in defining build stages and steps.
|
||||
3. Error handling: Declarative pipelines provide a more comprehensive error handling system with built-in conditions and actions, while scripted pipelines require more manual error handling.
|
||||
4. Ease of use: Declarative pipelines are easier to use for beginners and provide a simpler syntax, while scripted pipelines require more expertise in Groovy and can be more complex.
|
||||
5. Maintenance: Declarative pipelines are easier to maintain and can be modified with less effort compared to scripted pipelines, which can be more difficult to modify and extend over time.
|
||||
|
||||
I am familiar with both types of pipelines, but generally prefer declarative pipelines for their ease of use and simplicity.
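
A minimal sketch of the two styles (the `make` command is a placeholder):

```groovy
// Scripted pipeline: plain Groovy, maximum flexibility
node {
    stage('Build') {
        sh 'make build'
    }
}
```

```groovy
// Declarative pipeline: predefined structure (agent, stages, post, ...)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
    }
}
```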
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>How would you implement an option of a starting a build from a certain stage and not from the beginning?</summary><br><b>
|
||||
|
||||
To implement an option of starting a build from a certain stage and not from the beginning in a Jenkins pipeline, we can use the `when` directive along with a custom parameter to determine the starting stage. Here are the steps to implement this:
|
||||
|
||||
1. Add a custom parameter to the pipeline. This parameter can be a simple string or a more complex data type like a map.
|
||||
|
||||
```groovy
|
||||
parameters {
|
||||
string(name: 'START_STAGE', defaultValue: '', description: 'The name of the stage to start the build from')
|
||||
}
|
||||
```
|
||||
|
||||
2. Use the `when` directive to conditionally execute stages based on the value of the `START_STAGE` parameter.
|
||||
|
||||
```groovy
stage('Build') {
    when {
        expression { params.START_STAGE in ['', 'Build'] }
    }
    steps {
        // Build steps go here
    }
}

stage('Test') {
    when {
        expression { params.START_STAGE in ['', 'Build', 'Test'] }
    }
    steps {
        // Test steps go here
    }
}

stage('Deploy') {
    when {
        expression { params.START_STAGE in ['', 'Build', 'Test', 'Deploy'] }
    }
    steps {
        // Deploy steps go here
    }
}
```
|
||||
|
||||
|
||||
In this example, we use the `when` directive to run each stage only if the `START_STAGE` parameter is empty or names that stage or an earlier one. Setting `START_STAGE` to a later stage therefore skips every stage that comes before it.
|
||||
|
||||
3. Trigger the pipeline and pass the `START_STAGE` parameter as needed.
|
||||
|
||||
```groovy
pipeline {
    agent any
    parameters {
        string(name: 'START_STAGE', defaultValue: '', description: 'The name of the stage to start the build from')
    }
    stages {
        stage('Build') {
            when { expression { params.START_STAGE in ['', 'Build'] } }
            steps {
                // Build steps go here
            }
        }
        stage('Test') {
            when { expression { params.START_STAGE in ['', 'Build', 'Test'] } }
            steps {
                // Test steps go here
            }
        }
        stage('Deploy') {
            when { expression { params.START_STAGE in ['', 'Build', 'Test', 'Deploy'] } }
            steps {
                // Deploy steps go here
            }
        }
    }
}
```
|
||||
|
||||
|
||||
When triggering the pipeline, you can pass the `START_STAGE` parameter to start the build from a specific stage.
|
||||
|
||||
For example, if you want to start the build from the `Test` stage, you can trigger the pipeline with the `START_STAGE` parameter set to `'Test'`:
|
||||
|
||||
```
# via the "Build with Parameters" UI, or through the remote API:
JENKINS_URL/job/<JOB_NAME>/buildWithParameters?START_STAGE=Test
```
|
||||
|
||||
This will cause the pipeline to skip the `Build` stage and start directly from the `Test` stage.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Do you have experience with developing a Jenkins plugin? Can you describe this experience?</summary><br><b>
|
||||
|
||||
Developing a Jenkins plugin requires knowledge of Java and familiarity with the Jenkins API. The process typically involves setting up a development environment, creating a new plugin project, defining the plugin's extension points, and implementing the desired functionality using Java code. Once the plugin is developed, it can be packaged and deployed to Jenkins.
|
||||
|
||||
The Jenkins plugin ecosystem is extensive, and there are many resources available to assist with plugin development, including documentation, forums, and online communities. Additionally, Jenkins provides tools such as Jenkins Plugin POM Generator and Jenkins Plugin Manager to help with plugin development and management.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -546,7 +271,7 @@ For example, you might configure the workflow to trigger every time a changed is
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>True or False? In Github Actions, jobs are executed in parallel by default</summary><br><b>
|
||||
<summary>True or False? In Github Actions, jobs are executed in parallel by deafult</summary><br><b>
|
||||
|
||||
True
|
||||
</b></details>
|
||||
|
@ -1,5 +1,5 @@
|
||||
## Deploy to Kubernetes
|
||||
|
||||
* Write a pipeline that will deploy an "hello world" web app to Kubernetes
|
||||
* Write a pipeline that will deploy an "hello world" web app to Kubernete
|
||||
* The CI/CD system (where the pipeline resides) and the Kubernetes cluster should be on separate systems
|
||||
* The web app should be accessible remotely and only with HTTPS
|
||||
|
@ -6,7 +6,7 @@ Note: this exercise can be solved in various ways. The solution described here i
|
||||
2. Deploy Kubernetes on a remote host (minikube can be an easy way to achieve it)
|
||||
3. Create a simple web app or [page](html)
|
||||
|
||||
4. Create Kubernetes [resources](helloworld.yml) - Deployment, Service and Ingress (for HTTPS access)
|
||||
4. Create Kubernetes [resoruces](helloworld.yml) - Deployment, Service and Ingress (for HTTPS access)
|
||||
5. Create an [Ansible inventory](inventory) and insert the address of the Kubernetes cluster
|
||||
6. Write [Ansible playbook](deploy.yml) to deploy the Kubernetes resources and also generate
|
||||
7. Create a [pipeline](Jenkinsfile)
|
||||
|
@ -14,7 +14,7 @@
|
||||
openssl_privatekey:
|
||||
path: /etc/ssl/private/privkey.pem
|
||||
|
||||
- name: generate openssl certificate signing requests
|
||||
- name: generate openssl certficate signing requests
|
||||
openssl_csr:
|
||||
path: /etc/ssl/csr/hello-world.app.csr
|
||||
privatekey_path: /etc/ssl/private/privkey.pem
|
||||
|
@ -822,7 +822,7 @@ Through the use of namespaces and cgroups. Linux kernel has several types of nam
|
||||
|
||||
* namespaces: same as cgroups, namespaces isolate some of the system resources so it's available only for processes in the namespace. Differently from cgroups the focus with namespaces is on resources like mount points, IPC, network, ... and not about memory and CPU as in cgroups
|
||||
|
||||
* SElinux: the access control mechanism used to protect processes. Unfortunately to this date many users don't actually understand SElinux and some turn it off but nonetheless, it's a very important security feature of the Linux kernel, used by container as well
|
||||
* SElinux: the access control mechanism used to protect processes. Unfortunately to this date many users don't actually understand SElinux and some turn it off but nontheless, it's a very important security feature of the Linux kernel, used by container as well
|
||||
|
||||
* Seccomp: similarly to SElinux, it's also a security mechanism, but its focus is on limiting the processes in regards to using system calls and file descriptors
|
||||
</b></details>
|
||||
@ -1224,7 +1224,7 @@ In rootless containers, user namespace appears to be running as root but it does
|
||||
<details>
|
||||
<summary>When running a container, usually a virtual ethernet device is created. To do so, root privileges are required. How is it then managed in rootless containers?</summary><br><b>
|
||||
|
||||
Networking is usually managed by Slirp in rootless containers. Slirp creates a tap device which is also the default route and it creates it in the network namespace of the container. This device's file descriptor passed to the parent who runs it in the default namespace and the default namespace connected to the internet. This enables communication externally and internally.
|
||||
Networking is usually managed by Slirp in rootless containers. Slirp creates a tap device which is also the default route and it creates it in the network namepsace of the container. This device's file descriptor passed to the parent who runs it in the default namespace and the default namespace connected to the internet. This enables communication externally and internally.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
|
@ -1,191 +0,0 @@
|
||||
# Databases
|
||||
|
||||
- [Databases](#databases)
|
||||
- [Exercises](#exercises)
|
||||
- [Questions](#questions)
|
||||
- [SQL](#sql)
|
||||
- [Time Series](#time-series)
|
||||
|
||||
## Exercises
|
||||
|
||||
|Name|Topic|Objective & Instructions|Solution|Comments|
|
||||
|--------|--------|------|----|----|
|
||||
| Message Board Tables | Relational DB Tables | [Exercise](topics/databases/table_for_message_board_system.md) | [Solution](topics/databases/solutions/table_for_message_board_system.md)
|
||||
|
||||
## Questions
|
||||
|
||||
|
||||
<details>
|
||||
<summary>What type of databases are you familiar with?</summary><br><b>
|
||||
|
||||
Relational (SQL)
|
||||
NoSQL
|
||||
Time series
|
||||
</b></details>
|
||||
|
||||
### SQL
|
||||
|
||||
<details>
|
||||
<summary>What is a relational database?</summary><br><b>
|
||||
|
||||
* Data Storage: system to store data in tables
|
||||
* SQL: programming language to manage relational databases
|
||||
* Data Definition Language: a standard syntax to create, alter and delete tables
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What does it mean when a database is ACID compliant?</summary><br>
|
||||
|
||||
ACID stands for Atomicity, Consistency, Isolation, Durability. In order to be ACID compliant, the database must meet each of the four criteria
|
||||
|
||||
**Atomicity** - When a change occurs to the database, it should either succeed or fail as a whole.
|
||||
|
||||
For example, if you were to update a table, the update should completely execute. If it only partially executes, the
|
||||
update is considered failed as a whole, and will not go through - the DB will revert back to its original
|
||||
state before the update occurred. It should also be mentioned that Atomicity ensures that each
|
||||
transaction is completed as its own standalone "unit" - if any part fails, the whole statement fails.
|
||||
|
||||
**Consistency** - any change made to the database should bring it from one valid state into the next.
|
||||
|
||||
For example, if you make a change to the DB, it shouldn't corrupt it. Consistency is upheld by checks and constraints that
|
||||
are pre-defined in the DB. For example, if you tried to change a value from a string to an int when the column
|
||||
should be of datatype string, a consistent DB would not allow this transaction to go through, and the action would
|
||||
not be executed
|
||||
|
||||
**Isolation** - this ensures that a database will never be seen "mid-update" - as multiple transactions are running at
|
||||
the same time, it should still leave the DB in the same state as if the transactions were being run sequentially.
|
||||
|
||||
For example, let's say that 20 other people were making changes to the database at the same time. At the
|
||||
time you executed your query, 15 of the 20 changes had gone through, but 5 were still in progress. You should
|
||||
only see the 15 changes that had completed - you wouldn't see the database mid-update as the change goes through.
|
||||
|
||||
**Durability** - Once a change is committed, it will remain committed regardless of what happens
|
||||
(power failure, system crash, etc.). This means that all completed transactions
|
||||
must be recorded in non-volatile memory.
|
||||
|
||||
Note that SQL is by nature ACID compliant. Certain NoSQL DB's can be ACID compliant depending on
|
||||
how they operate, but as a general rule of thumb, NoSQL DB's are not considered ACID compliant
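
As a small illustration of atomicity, a transaction in standard SQL (the table and IDs are hypothetical) either applies both updates or neither:

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- on any error, issue ROLLBACK instead and the database returns to its previous state
```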
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>What is sharding?</summary><br><b>
|
||||
|
||||
Sharding is a horizontal partitioning.
|
||||
|
||||
Are you able to explain what it is good for?
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>You find out your database became a bottleneck and users experience issues accessing data. How can you deal with such a situation?</summary><br><b>
|
||||
|
||||
Not much information is provided as to why it became a bottleneck or what the current architecture is, so one general approach could be<br>
|
||||
to reduce the load on your database by moving frequently-accessed data to an in-memory structure (a cache).
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is a connection pool?</summary><br><b>
|
||||
|
||||
Connection Pool is a cache of database connections and the reason it's used is to avoid an overhead of establishing a connection for every query done to a database.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is a connection leak?</summary><br><b>
|
||||
|
||||
A connection leak is a situation where a database connection isn't closed after being created and is no longer needed.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is Table Lock?</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Your database performs more slowly than usual. More specifically, your queries are taking a lot of time. What would you do?</summary><br><b>
|
||||
|
||||
* Query for running queries and cancel the irrelevant queries
|
||||
* Check for connection leaks (query for running connections and include their IP)
|
||||
* Check for table locks and kill irrelevant locking sessions
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is a Data Warehouse?</summary><br><b>
|
||||
|
||||
"A data warehouse is a subject-oriented, integrated, time-variant and non-volatile collection of data in support of organisation's decision-making process"
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain what is a time-series database</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is OLTP (Online transaction processing)?</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is OLAP (Online Analytical Processing)?</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is an index in a database?</summary><br><b>
|
||||
|
||||
A database index is a data structure that improves the speed of operations in a table. Indexes can be created using one or more columns, providing the basis for both rapid random lookups and efficient ordering of access to records.
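
For example (the table and column are hypothetical):

```sql
-- speed up lookups by email on a large "users" table
CREATE INDEX idx_users_email ON users (email);
```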
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What data types are there in relational databases?</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain Normalization</summary><br><b>
|
||||
|
||||
Data that is used multiple times in a database should be stored once and referenced with a foreign key.<br>
|
||||
This has the clear benefit of ease of maintenance where you need to change a value only in a single place to change it everywhere.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain Primary Key and Foreign Key</summary><br><b>
|
||||
|
||||
Primary Key: each row in every table should have a unique identifier that represents the row.<br>
|
||||
Foreign Key: a reference to another table's primary key. This allows you to join tables together to retrieve all the information you need without duplicating data.
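
A small hypothetical schema showing both:

```sql
CREATE TABLE customers (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),  -- foreign key to customers
    total       NUMERIC
);
```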
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What types of data tables have you used?</summary><br><b>
|
||||
|
||||
* Primary data table: main data you care about
|
||||
* Details table: includes a foreign key and has one to many relationship
|
||||
* Lookup values table: can be one table per lookup or a table containing all the lookups and has one to many relationship
|
||||
* Multi reference table
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is ORM? What benefits does it provide with regards to relational database usage?</summary><br><b>
|
||||
|
||||
[Wikipedia](https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping): "is a programming technique for converting data between incompatible type systems using object-oriented programming languages"
|
||||
|
||||
In regards to the relational databases:
|
||||
|
||||
* Database as code
|
||||
* Database abstraction
|
||||
* Encapsulates SQL complexity
|
||||
* Enables code review process
|
||||
* Enables usage as a native OOP structure
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is DDL?</summary><br><b>
|
||||
|
||||
[Wikipedia](https://en.wikipedia.org/wiki/Data_definition_language): "In the context of SQL, data definition or data description language (DDL) is a syntax for creating and modifying database objects such as tables, indices, and users."
|
||||
</b></details>
|
||||
|
||||
### Time Series
|
||||
|
||||
<details>
|
||||
<summary>What is Time Series database?</summary><br><b>
|
||||
|
||||
A database designed specifically for time series based data.
|
||||
|
||||
It comes with multiple optimizations:
|
||||
|
||||
<TODO>: complete this :)
|
||||
</b></details>
|
@ -270,7 +270,7 @@ We can understand web servers using two view points, which is:
|
||||
## How communication between web server and web browsers established:
|
||||
|
||||
Whenever a browser needs a file that is hosted on a web server, the browser requests the page from the web server and the web server responds with that page.
|
||||
This communication between web browser and web server happens in the following ways:
|
||||
This communcation between web browser and web server happens in the following ways:
|
||||
|
||||
(1) The user enters the domain name in the browser, and the browser then searches for the IP address of the entered name. This can be done in 2 ways -
|
||||
|
||||
@ -393,11 +393,11 @@ This situation might lead to bugs which hard to identify and reproduce.
|
||||
<details>
|
||||
<summary>Explain Declarative and Procedural styles. The technologies you are familiar with (or using) are using procedural or declarative style?</summary><br><b>
|
||||
|
||||
Declarative - You write code that specifies the desired end state<br>
|
||||
Declarative - You write code that specifies the desired end state
|
||||
Procedural - You describe the steps to get to the desired end state
|
||||
|
||||
Declarative Tools - Terraform, Puppet, CloudFormation, Ansible<br>
|
||||
Procedural Tools - Chef
|
||||
Declarative Tools - Terraform, Puppet, CloudFormation
|
||||
Procedural Tools - Ansible, Chef
|
||||
|
||||
To better emphasize the difference, consider creating two virtual instances/servers.
|
||||
In declarative style, you would specify two servers and the tool will figure out how to reach that state.
|
||||
@ -455,7 +455,7 @@ A repository that doesn't holds the application source code, but the configurati
|
||||
One might say we need more details as to what these configuration and infra files look like exactly and how complex the application and its CI/CD pipeline(s), but in general, most of the time you will want to put configuration and infra related files in their own separate repository and not in the repository of the application for multiple reasons:
|
||||
|
||||
* Every change submitted to the configuration, shouldn't trigger the CI/CD of the application, it should be testing out and applying the modified configuration, not the application itself
|
||||
* When you mix application code with configuration and infra related files
|
||||
* When you mix application code with conifguration and infra related files
|
||||
</b></details>
|
||||
|
||||
#### SRE
|
||||
@ -506,31 +506,3 @@ Google: "Monitoring is one of the primary means by which service owners keep tra
|
||||
|
||||
Read more about it [here](https://sre.google/sre-book/introduction)
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What are the two main SRE KPIs</summary><br><b>
|
||||
|
||||
Service Level Indicators (SLI) and Service Level Objectives (SLO).
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is Toil?</summary><br><b>
|
||||
|
||||
Google: Toil is the kind of work tied to running a production service that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows
|
||||
|
||||
Read more about it [here](https://sre.google/sre-book/eliminating-toil/)
|
||||
</b></details>
|
||||
|
||||
|
||||
<details>
|
||||
<summary>What is a postmortem ? </summary><br><b>
|
||||
|
||||
The postmortem is a process that should take place following an incident. Its purpose is to identify the root cause of the incident and the actions that should be taken to prevent this kind of incident from happening again. </b></details>
|
||||
|
||||
|
||||
<details>
|
||||
<summary>What is the core value often put forward when talking about postmortem?</summary><br><b>
|
||||
|
||||
Blamelessness.
|
||||
Postmortems need to be blameless, and this value should be reiterated at the beginning of every postmortem. This is the best way to ensure that people focus on finding the root cause rather than trying to hide their possible faults.</b></details>
|
||||
|
||||
|
@ -89,7 +89,6 @@ A mapping between domain name and an IP address.
|
||||
<summary>What types of DNS records are there?</summary><br><b>
|
||||
|
||||
* A
|
||||
* CNAME
|
||||
* PTR
|
||||
* MX
|
||||
* AAAA
|
||||
@ -162,29 +161,8 @@ True.
|
||||
|
||||
<details>
|
||||
<summary>Which techniques a DNS can use for load balancing?</summary><br><b>
|
||||
There are several techniques that a DNS can use for load balancing, including:
|
||||
|
||||
* Round-robin DNS
|
||||
|
||||
* Weighted round-robin DNS
|
||||
|
||||
* Least connections
|
||||
|
||||
* GeoDNS
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is a DNS zone?</summary><br><b>
|
||||
A DNS zone is a logical container that holds all the DNS resource records for a specific domain name.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What types of zones are there?</summary><br><b>
|
||||
There are several types, including:
|
||||
|
||||
* Primary zone: A primary zone is a read/write zone that is stored in a master DNS server.
|
||||
|
||||
* Secondary zone: A secondary zone is a read-only copy of a primary zone that is stored in a slave DNS server.
|
||||
|
||||
* Stub zone: A stub zone is a type of zone that contains only the essential information about a domain name. It is used to reduce the amount of DNS traffic and improve the efficiency of the DNS resolution process.
|
||||
<summary>What is a zone? What types of zones are there?</summary><br><b>
|
||||
</b></details>
|
@ -88,7 +88,7 @@ False. You can see [here](https://cloud.google.com/about/locations) which produc
|
||||
Organization
|
||||
Folder
|
||||
Project
|
||||
Resources
|
||||
Resoruces
|
||||
|
||||
* Organizations - Company
|
||||
* Folder - usually for departments, teams, products, etc.
|
||||
@ -195,7 +195,7 @@ While labels don't affect the resources on which they are applied, network tags
|
||||
<details>
|
||||
<summary>Tell me what do you know about GCP networking</summary><br><b>
|
||||
|
||||
Virtual Private Cloud(VPC) network is a virtual version of physical network, implemented in Google's internal Network. VPC is a global resource in GCP.
|
||||
Virtual Private Cloud(VPC) network is a virtual version of physical network, implemented in Google's internal Network. VPC is a gloabal resource in GCP.
|
||||
Subnetworks (subnets) are regional resources, i.e., subnets can be created within regions.
|
||||
|
||||
VPC are created in 2 modes,
|
||||
@ -290,7 +290,7 @@ It is a set of tools to help developers write, run and debug GCP kubernetes base
|
||||
It is a managed application platform for organisations like enterprises that require quick modernisation and certain levels
|
||||
of consistency for their legacy applications in a hybrid or multicloud world. From this explanation the core ideas can be drawn from these statements;
|
||||
|
||||
* Managed -> the customer does not need to worry about the underlying software integrations, they just enable the API.
|
||||
* Managed -> the customer does not need to worry about the underlying software intergrations, they just enable the API.
|
||||
* application platform -> It consists of open source tools like K8s, Knative, Istio and Tekton
|
||||
* Enterprises -> these are usually organisations with complex needs
|
||||
* Consistency -> to have the same policies declaratively initiated to be run anywhere securely e.g on-prem, GCP or other-clouds (AWS or Azure)
|
||||
@ -344,7 +344,7 @@ instances in the project.
|
||||
* Node security - By default workloads are provisioned on Compute engine instances that use Google's Container Optimised OS. This operating system implements a locked-down firewall, limited user accounts with root disabled and a read-only filesystem. There is a further option to enable GKE Sandbox for stronger isolation in multi-tenant deployment scenarios.
|
||||
* Network security - Within a created cluster VPC, Anthos GKE leverages a powerful software-defined network that enables simple Pod-to-Pod communications. Network policies allow locking down ingress and egress connections in a given namespace. Filtering can also be implemented to incoming load-balanced traffic for services that require external access, by supplying whitelisted CIDR IP ranges.
|
||||
* Workload security - Running workloads run with limited privileges, default Docker AppArmor security policies are applied to all Kubernetes Pods. Workload identity for Anthos GKE aligns with the open source kubernetes service accounts with GCP service account permissions.
|
||||
* Audit logging - Administrators are given a way to retain, query, process and alert on events of the deployed environments.
|
||||
* Audit logging - Adminstrators are given a way to retain, query, process and alert on events of the deployed environments.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -399,7 +399,7 @@ It follows common modern software development practices which makes cluster conf
|
||||
|
||||
<details>
|
||||
<summary>How does Anthos Service Mesh help?</summary><br><b>
|
||||
Tool and technology integration that makes up Anthos service mesh delivers significant operational benefits to Anthos environments, with minimal additional overhead such as follows:
|
||||
Tool and technology integration that makes up Anthos service mesh delivers signficant operational benefits to Anthos environments, with minimal additional overhead such as follows:
|
||||
|
||||
* Uniform observability - the data plane reports service to service communication back to the control plane generating a service dependency graph. Traffic inspection by the proxy inserts headers to facilitate distributed tracing, capturing and reporting service logs together with service-level metrics (i.e latency, errors, availability).
|
||||
* Operational agility - fine-grained controls for managing the flow of inter-mesh (north-south) and intra-mesh (east-west) traffic are provided.
|
||||
|
@ -3,7 +3,7 @@
|
||||
## Exercises
|
||||
|
||||
|Name|Topic|Objective & Instructions|Solution|Comments|
|
||||
| ----------------- | ------ | -------------------------------- | ------------------------------------------- | -------- |
|
||||
|--------|--------|------|----|----|
|
||||
| My first Commit | Commit | [Exercise](commit_01.md) | [Solution](solutions/commit_01_solution.md) | |
|
||||
| Time to Branch | Branch | [Exercise](branch_01.md) | [Solution](solutions/branch_01_solution.md) | |
|
||||
| Squashing Commits | Commit | [Exercise](squashing_commits.md) | [Solution](solutions/squashing_commits.md) | |
|
||||
@ -14,27 +14,21 @@
|
||||
|
||||
<details>
|
||||
<summary>How do you know if a certain directory is a git repository?</summary><br><b>
|
||||
|
||||
You can check if there is a ".git" directory.
|
||||
</b>
|
||||
</details>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain the following: <code>git directory</code>, <code>working directory</code> and <code>staging area</code></summary><br>
|
||||
<b>
|
||||
<summary>Explain the following: <code>git directory</code>, <code>working directory</code> and <code>staging area</code></summary><br><b>
|
||||
|
||||
This answer taken from [git-scm.com](https://git-scm.com/book/en/v1/Getting-Started-Git-Basics#_the_three_states)
|
||||
|
||||
"The Git directory is where Git stores the meta-data and object database for your project. This is the most important
|
||||
part of Git, and it is what is copied when you clone a repository from another computer.
|
||||
"The Git directory is where Git stores the meta data and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.
|
||||
|
||||
The working directory is a single checkout of one version of the project. These files are pulled out of the compressed
|
||||
database in the Git directory and placed on disk for you to use or modify.
|
||||
The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.
|
||||
|
||||
The staging area is a simple file, generally contained in your Git directory, that stores information about what will go
|
||||
into your next commit. It’s sometimes referred to as the index, but it’s becoming standard to refer to it as the staging
|
||||
area."
|
||||
</b>
|
||||
</details>
|
||||
The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It’s sometimes referred to as the index, but it’s becoming standard to refer to it as the staging area."
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is the difference between <code>git pull</code> and <code>git fetch</code>?</summary><br><b>
|
||||
@ -52,22 +46,20 @@ a separate branch in your local repository
|
||||
<summary>How to check if a file is tracked and if not, then track it?</summary><br><b>
|
||||
|
||||
There are different ways to check whether a file is tracked or not:
|
||||
|
||||
- `git ls-files <file>` -> exit code of 0 means it's tracked
|
||||
- `git blame <file>`
|
||||
...
|
||||
</b>
|
||||
</details>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain what the file <code>gitignore</code> is used for</summary><br><b>
|
||||
The purpose of <code>gitignore</code> files is to ensure that certain files not tracked by Git remain untracked. To stop tracking a file that is currently tracked, use git rm --cached.
|
||||
</b>
|
||||
</details>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>How can you see which changes have done before committing them?</summary><br><b>
|
||||
`git diff`
|
||||
|
||||
`git diff`
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -80,8 +72,7 @@ The purpose of <code>gitignore</code> files is to ensure that certain files not
|
||||
<summary>You've created new files in your repository. How to make sure Git tracks them?</summary><br><b>
|
||||
|
||||
`git add FILES`
|
||||
</b>
|
||||
</details>
|
||||
</b></details>
|
||||
|
||||
### Scenarios
|
||||
|
||||
@ -122,10 +113,10 @@ Finally, with certain build systems, you can know which files are being used/rel
|
||||
<details>
|
||||
<summary>What's is the branch strategy (flow) you know?</summary><br><b>
|
||||
|
||||
- Git flow
|
||||
- GitHub flow
|
||||
- Trunk based development
|
||||
- GitLab flow
|
||||
* Git flow
|
||||
* GitHub flow
|
||||
* Trunk based development
|
||||
* GitLab flow
|
||||
|
||||
[Explanation](https://www.bmc.com/blogs/devops-branching-strategies/#:~:text=What%20is%20a%20branching%20strategy,used%20in%20the%20development%20process).
|
||||
|
||||
@ -139,15 +130,13 @@ True
|
||||
|
||||
<details>
|
||||
<summary>You have two branches - main and devel. How do you make sure devel is in sync with main?</summary><br><b>
|
||||
<code>
|
||||
|
||||
```
|
||||
git checkout main
|
||||
git pull
|
||||
git checkout devel
|
||||
git merge main
|
||||
```
|
||||
</code>
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -165,7 +154,7 @@ Using the HEAD file: `.git/HEAD`
|
||||
<details>
|
||||
<summary>What <code>unstaged</code> means in regards to Git?</summary><br><b>
|
||||
|
||||
A file that is in the working directory but is not in the HEAD nor in the staging area is referred to as "unstaged".
|
||||
A file the is in the working directory but is not in the HEAD nor in the staging area, referred to as "unstaged".
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -179,12 +168,9 @@ True
|
||||
<details>
|
||||
<summary>You have two branches - main and devel. How do you merge devel into main?</summary><br><b>
|
||||
|
||||
```
|
||||
git checkout main
|
||||
git merge devel
|
||||
git push origin main
|
||||
```
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -217,8 +203,8 @@ This page explains it the best: https://git-scm.com/docs/merge-strategies
|
||||
|
||||
Probably good to mention that it's:
|
||||
|
||||
- It's good for cases of merging more than one branch (and also the default of such use cases)
|
||||
- It's primarily meant for bundling topic branches together
|
||||
* It's good for cases of merging more than one branch (and also the default of such use cases)
|
||||
* It's primarily meant for bundling topic branches together
|
||||
|
||||
This is a great article about Octopus merge: http://www.freblogg.com/2016/12/git-octopus-merge.html
|
||||
</b></details>
|
||||
@ -232,7 +218,6 @@ This is a great article about Octopus merge: http://www.freblogg.com/2016/12/git
|
||||
|
||||
`git reset` depends on the usage, can modify the index or change the commit which the branch head
|
||||
is currently pointing at.
|
||||
|
||||
</p>
|
||||
</b></details>
|
||||
|
||||
@ -247,7 +232,6 @@ Using the `git rebase` command
|
||||
<details>
|
||||
<summary>In what situations are you using <code>git rebase</code>?</summary><br><b>
|
||||
Suppose a team is working on a `feature` branch that was created from the `main` branch of the repo. At the point where the feature development is done and we finally wish to merge the feature branch into the main branch without keeping the history of the commits made in the feature branch, a `git rebase` will be helpful.
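
One possible flow (branch names are just examples) - replay the feature commits on top of `main`, optionally squashing them interactively, then fast-forward `main`:

```
git checkout feature
git rebase -i main           # rebase (and optionally squash) the feature commits onto main
git checkout main
git merge --ff-only feature
```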
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -256,7 +240,6 @@ Suppose a team is working on a `feature` branch that is coming from the `main` b
|
||||
```
|
||||
git checkout HEAD~1 -- /path/of/the/file
|
||||
```
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -267,14 +250,15 @@ git checkout HEAD~1 -- /path/of/the/file
|
||||
<summary>What is the <code>.git</code> directory? What can you find there?</summary><br><b>
|
||||
The <code>.git</code> folder contains all the information that is necessary for your project in version control and all the information about commits, remote repository address, etc. All of them are present in this folder. It also contains a log that stores your commit history so that you can roll back to history.
|
||||
|
||||
|
||||
This info copied from [https://stackoverflow.com/questions/29217859/what-is-the-git-folder](https://stackoverflow.com/questions/29217859/what-is-the-git-folder)
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What are some Git anti-patterns? Things that you shouldn't do</summary><br><b>
|
||||
|
||||
- Not waiting too long between commits
|
||||
- Not removing the .git directory :)
|
||||
* Not waiting too long between commits
|
||||
* Not removing the .git directory :)
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -357,13 +341,13 @@ Shortly, it runs `git diff` twice:
|
||||
|
||||
One reason is about the structure of the index, commits, etc.
|
||||
|
||||
- Every file in a commit is stored in tree object
|
||||
- The index is then a flattened structure of tree objects
|
||||
- All files in the index have pre-computed hashes
|
||||
- The diff operation then, is comparing the hashes
|
||||
* Every file in a commit is stored in tree object
|
||||
* The index is then a flattened structure of tree objects
|
||||
* All files in the index have pre-computed hashes
|
||||
* The diff operation then, is comparing the hashes
|
||||
|
||||
Another reason is caching
|
||||
|
||||
- Index caches information on working directory
|
||||
- When Git has the information for certain file cached, there is no need to look at the working directory file
|
||||
* Index caches information on working directory
|
||||
* When Git has the information for certain file cached, there is no need to look at the working directory file
|
||||
</b></details>
|
||||
|
@ -44,23 +44,3 @@ An application that publishes data to the Kafka cluster.
|
||||
|
||||
- Broker: a server with kafka process running on it. Such server has local storage. In a single Kafka clusters there are usually multiple brokers.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is the role of ZooKeeper in Kafka?</summary><br/><b>
|
||||
In Kafka, Zookeeper is a centralized controller that manages metadata for producers, brokers, and consumers.
|
||||
Zookeeper also:
|
||||
<ul>
|
||||
<li>Tracks which brokers are part of the Kafka cluster</li>
|
||||
<li>
|
||||
Determines which broker is the leader of a given partition and topic
|
||||
</li>
|
||||
<li>
|
||||
Performs leader elections
|
||||
</li>
|
||||
<li>
|
||||
Manages cluster membership of brokers
|
||||
</li>
|
||||
</ul>
|
||||
|
||||
</b>
|
||||
</details>
|
||||
|
@ -22,7 +22,7 @@
|
||||
|
||||
## Setup
|
||||
|
||||
* Set up Kubernetes cluster. Use one of the following
|
||||
* Set up Kubernetes cluster. Use on of the following
|
||||
1. Minikube for local free & simple cluster
|
||||
2. Managed Cluster (EKS, GKE, AKS)
|
||||
|
||||
@ -54,7 +54,7 @@ Note: create an alias (`alias k=kubectl`) and get used to `k get po`
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Assuming that you have a Pod called "nginx-test", how to remove it?</summary><br><b>
|
||||
<summary>Assuming you have a Pod called "nginx-test", how to remove it?</summary><br><b>
|
||||
|
||||
`k delete po nginx-test`
|
||||
</b></details>
|
||||
@ -107,7 +107,7 @@ If you ask yourself how would I remember writing all of that? no worries, you ca
|
||||
<details>
|
||||
<summary>How to test a manifest is valid?</summary><br><b>
|
||||
|
||||
with `--dry-run` flag which will not actually create it, but it will test it and you can find this way, any syntax issues.
|
||||
with `--dry-run` flag which will not actually create it, but it will test it and you can find this way any syntax issues.
|
||||
|
||||
`k create -f YAML_FILE --dry-run`
|
||||
</b></details>
|
||||
@ -119,7 +119,7 @@ with `--dry-run` flag which will not actually create it, but it will test it and
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>How to check how many containers run in single Pod?</summary><br><b>
|
||||
<summary>How to check how many containers run in signle Pod?</summary><br><b>
|
||||
|
||||
`k get po POD_NAME` and see the number under "READY" column.
|
||||
|
||||
@ -158,11 +158,7 @@ To count them: `k get po -l env=prod --no-headers | wc -l`
|
||||
First change to the directory tracked by kubelet for creating static pod: `cd /etc/kubernetes/manifests` (you can verify path by reading kubelet conf file)
|
||||
|
||||
Now create the definition/manifest in that directory
|
||||
|
||||
`k run some-pod --image=python --command sleep 2017 --restart=Never --dry-run=client -o yaml > status-pod.yaml`
|
||||
=======
|
||||
`k run some-pod --image=python --command sleep 2017 --restart=Never --dry-run=client -o yaml > static-pod.yaml`
|
||||
|
||||
`k run some-pod --image=python --command sleep 2017 --restart=Never --dry-run=client -o yaml > statuc-pod.yaml`
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -182,7 +178,7 @@ Go to that directory and remove the manifest/definition of the staic Pod (`rm <S
|
||||
The container failed to run (due to different reasons) and Kubernetes tries to run the Pod again after some delay (= BackOff time).
|
||||
|
||||
Some reasons for it to fail:
|
||||
- Misconfiguration - misspelling, non supported value, etc.
|
||||
- Misconfiguration - mispelling, non supported value, etc.
|
||||
- Resource not available - nodes are down, PV not mounted, etc.
|
||||
|
||||
Some ways to debug:
|
||||
@ -308,8 +304,6 @@ Note: create an alias (`alias k=kubectl`) and get used to `k get no`
|
||||
|
||||
<details>
|
||||
<summary>Create an internal service called "sevi" to expose the app 'web' on port 1991</summary><br><b>
|
||||
|
||||
`kubectl expose pod web --port=1991 --name=sevi`
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -498,7 +492,9 @@ The selector doesn't match the label (cache vs cachy). To solve it, fix cachy so
|
||||
<details>
|
||||
<summary>Create a deployment called "pluck" using the image "redis" and make sure it runs 5 replicas</summary><br><b>
|
||||
|
||||
`kubectl create deployment pluck --image=redis --replicas=5`
|
||||
`kubectl create deployment pluck --image=redis`
|
||||
|
||||
`kubectl scale deployment pluck --replicas=5`
|
||||
|
||||
</b></details>
|
||||
|
||||
@ -687,7 +683,7 @@ affinity:
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Create and run a Pod called `some-pod` with the image `redis` and configure it to use the selector `hw=max`</summary><br><b>
|
||||
<summary>reate and run a Pod called `some-pod` with the image `redis` and configure it to use the selector `hw=max`</summary><br><b>
|
||||
|
||||
```
|
||||
kubectl run some-pod --image=redis --dry-run=client -o yaml > pod.yaml
|
||||
@ -853,7 +849,7 @@ Running `kubectl get events` you can see which scheduler was used.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>You want to run a new Pod and you would like it to be scheduled by a custom scheduler. How to achieve it?</summary><br><b>
|
||||
<summary>You want to run a new Pod and you would like it to be scheduled by a custom schduler. How to achieve it?</summary><br><b>
|
||||
|
||||
Add the following to the spec of the Pod:
|
||||
|
||||
|
@ -9,7 +9,7 @@ What's your goal?
|
||||
* I would like to learn Kubernetes by practicing both theoretical and practical material
|
||||
* Solve [exercises](#kubernetes-exercises)
|
||||
* Solve [questions](#kubernetes-questions)
|
||||
* I would like to learn practical Kubernetes
|
||||
* I would like to learn parctical Kubernetes
|
||||
* Solve [exercises](#kubernetes-exercises)
|
||||
|
||||
- [Kubernetes](#kubernetes)
|
||||
@ -314,7 +314,6 @@ Outputs the status of each of the control plane components.
|
||||
<details>
|
||||
<summary>What happens to running pods if if you stop Kubelet on the worker nodes?</summary><br><b>
|
||||
|
||||
When you stop the kubelet service on a worker node, it will no longer be able to communicate with the Kubernetes API server. As a result, the node will be marked as NotReady and the pods running on that node will be marked as Unknown. The Kubernetes control plane will then attempt to reschedule the pods to other available nodes in the cluster.
|
||||
</b></details>
|
||||
|
||||
#### Nodes Commands
|
||||
@ -737,29 +736,21 @@ A Deployment is a declarative statement for the desired state for Pods and Repli
|
||||
<details>
|
||||
<summary>How to create a deployment with the image "nginx:alpine"?</code></summary><br><b>
|
||||
|
||||
`kubectl create deployment my-first-deployment --image=nginx:alpine`
|
||||
`kubectl create deployment my_first_deployment --image=nginx:alpine`
|
||||
|
||||
OR
|
||||
|
||||
```
|
||||
cat << EOF | kubectl create -f -
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: nginx
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx:alpine
|
||||
EOF
|
||||
```
|
||||
</b></details>
|
||||
|
||||
@ -1182,7 +1173,7 @@ Explanation as to who added them:
|
||||
<details>
|
||||
<summary>After creating a service that forwards incoming external traffic to the containerized application, how to make sure it works?</summary><br><b>
|
||||
|
||||
You can run `curl <SERVICE IP>:<SERVICE PORT>` to examine the output.
|
||||
You can run `curl <SERIVCE IP>:<SERVICE PORT>` to examine the output.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -1325,7 +1316,7 @@ To run two instances of the applicaation?
|
||||
|
||||
`kubectl scale deployment <DEPLOYMENT_NAME> --replicas=2`
|
||||
|
||||
You can specify any other number, given that your application knows how to scale.
|
||||
You can speciy any other number, given that your application knows how to scale.
|
||||
</b></details>
|
||||
|
||||
### ReplicaSets
|
||||
@ -1800,9 +1791,9 @@ False. When a namespace is deleted, the resources in that namespace are deleted
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>While namespaces do provide scope for resources, they are not isolating them</summary><br><b>
|
||||
<summary>While namspaces do provide scope for resources, they are not isolating them</summary><br><b>
|
||||
|
||||
True. Try create two pods in two separate namespaces for example, and you'll see there is a connection between the two.
|
||||
True. Try create two pods in two separate namspaces for example, and you'll see there is a connection between the two.
|
||||
</b></details>
|
||||
|
||||
#### Namespaces - commands
|
||||
@ -1867,7 +1858,7 @@ If the namespace doesn't exist already: `k create ns dev`
|
||||
<details>
|
||||
<summary>What kube-node-lease contains?</summary><br><b>
|
||||
|
||||
It holds information on heartbeats of nodes. Each node gets an object which holds information about its availability.
|
||||
It holds information on hearbeats of nodes. Each node gets an object which holds information about its availability.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -2358,7 +2349,14 @@ The pod is automatically assigned with the default service account (in the names
|
||||
|
||||
### Patterns
|
||||
|
||||
<details>
|
||||
<summary>Explain the sidecar container pattern</summary><br><b>
|
||||
A sidecar container is an additional container that runs in the same Pod as the application container and supports or complements it (for example, shipping logs or proxying traffic). The sidecar has no purpose on its own - without the application container, the sidecar would not exist. In addition to the application container, there is a sidecar container.
|
||||
|
||||
In simpler words, when you have a Pod and there is more than one container running in that Pod that supports or complements the application container, it means you use the sidecar pattern.
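
A minimal sketch of the pattern (image names are placeholders) - one Pod with an application container and a supporting sidecar:

```
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: example/app:1.0
  - name: log-shipper      # sidecar: ships the app's logs, has no purpose without the app
    image: example/log-shipper:1.0
```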
|
||||
</b></details>
|
||||
|
||||
### CronJob
|
||||
|
||||
@ -2863,7 +2861,7 @@ Running `kubectl get events` you can see which scheduler was used.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>You want to run a new Pod and you would like it to be scheduled by a custom scheduler. How to achieve it?</summary><br><b>
|
||||
<summary>You want to run a new Pod and you would like it to be scheduled by a custom schduler. How to achieve it?</summary><br><b>
|
||||
|
||||
Add the following to the spec of the Pod:
|
||||
|
||||
@ -2921,7 +2919,7 @@ Exit and save. The pod should be in Running state now.
|
||||
|
||||
`NoSchedule`: prevents from resources to be scheduled on a certain node
|
||||
`PreferNoSchedule`: will prefer to schedule resources on other nodes before resorting to scheduling the resource on the chosen node (on which the taint was applied)
|
||||
`NoExecute`: Applying "NoSchedule" will not evict already running Pods (or other resources) from the node as opposed to "NoExecute" which will evict any already running resource from the Node
|
||||
`NoExecute`: Appling "NoSchedule" will not evict already running Pods (or other resources) from the node as opposed to "NoExecute" which will evict any already running resource from the Node
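
For example (the node name and key/value are placeholders):

```
kubectl taint nodes node1 dedicated=ci:NoSchedule     # apply the taint
kubectl taint nodes node1 dedicated=ci:NoSchedule-    # remove it (note the trailing "-")
```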
|
||||
</b></details>
|
||||
|
||||
### Resource Limits
|
||||
@ -3131,7 +3129,7 @@ Namespaces. See the following [namespaces question and answer](#namespaces-use-c
|
||||
The container failed to run (due to different reasons) and Kubernetes tries to run the Pod again after some delay (= BackOff time).
|
||||
|
||||
Some reasons for it to fail:
|
||||
- Misconfiguration - misspelling, non supported value, etc.
|
||||
- Misconfiguration - mispelling, non supported value, etc.
|
||||
- Resource not available - nodes are down, PV not mounted, etc.
|
||||
|
||||
Some ways to debug:
|
||||
|
@ -3,7 +3,7 @@
|
||||
## Requirements
|
||||
|
||||
1. Running Kubernetes cluster
|
||||
2. Kubctl version 1.14 or above
|
||||
2. Kustomize binary installed
|
||||
|
||||
## Objectives
|
||||
|
||||
|
@ -3,7 +3,7 @@
|
||||
## Requirements
|
||||
|
||||
1. Running Kubernetes cluster
|
||||
2. Kubctl version 1.14 or above
|
||||
2. Kustomize binary installed
|
||||
|
||||
## Objectives
|
||||
|
||||
@ -28,4 +28,4 @@ resources:
|
||||
- deployment.yml
|
||||
```
|
||||
|
||||
2. Run `kubectl apply -k someApp`
|
||||
2. Run `kustomize build someApp`
|
@ -9,5 +9,4 @@
|
||||
|
||||
## After you complete the exercise
|
||||
|
||||
|
||||
* Why did the "RESTARTS" count raised? - `Kubernetes restarted the Pod because we killed the process and the container was not running properly.`
|
||||
* Why did the "RESTARTS" count raised? - `because we killed the process and Kubernetes identified the container isn't running proprely so it performed restart to the Pod`
|
||||
|
@ -39,7 +39,7 @@ kubectl get rs
|
||||
3. Remove the ReplicaSet but NOT the pods it created
|
||||
|
||||
```
|
||||
kubectl delete -f rs.yaml --cascade=orphan
|
||||
kubectl delete -f rs.yaml --cascade=false
|
||||
```
|
||||
|
||||
4. Verify you've deleted the ReplicaSet but the Pods are still running
|
||||
|
@ -51,7 +51,7 @@ kubectl label pod <POD_NAME> type-
|
||||
5. List the Pods running. Are there more Pods running after removing the label? Why?
|
||||
|
||||
```
|
||||
Yes, there is an additional Pod running because once the label (used as a matching selector) was removed, the Pod became independent meaning, it's not controlled by the ReplicaSet anymore and the ReplicaSet was missing replicas based on its definition so, it created a new Pod.
|
||||
Yes, there is an additional Pod running because once the label (used as a matching selector) was removed, the Pod became independant meaning, it's not controlled by the ReplicaSet anymore and the ReplicaSet was missing replicas based on its definition so, it created a new Pod.
|
||||
```
|
||||
|
||||
6. Verify the ReplicaSet indeed created a new Pod
|
||||
|
@ -3,7 +3,7 @@
|
||||
## Linux Master Application
|
||||
|
||||
A completely free application for testing your knowledge on Linux.
|
||||
Disclaimer: developed by repository owner
|
||||
Desclaimer: developed by repository owner
|
||||
|
||||
<a href="https://play.google.com/store/apps/details?id=com.codingshell.linuxmaster"><img src="../../images/linux_master.jpeg"/></a>
|
||||
|
||||
@ -169,9 +169,6 @@ They take in input (<) and output for a given file (>) using stdin and stdout.
|
||||
- cut: a tool for cutting out selected portions of each line of a file:
|
||||
- syntax: `cut OPTION [FILE]`
|
||||
- cutting first two bytes from a word in a file: `cut -b 1-2 file.md`, output: `wo`
|
||||
- awk: a programming language that is mainly used for text processing and data extraction. It can be used to manipulate and modify text in a file:
|
||||
  - syntax: `awk [OPTIONS] [FILTER] [FILE]`
  - extracting a specific field from a CSV file: `awk -F ',' '{print $1}' file.csv`, output: first field of each line in the file
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -238,7 +235,6 @@ find . -iname "*.yaml" -exec sed -i "s/1/2/g" {} \;
|
||||
<summary>How to check which commands you executed in the past?</summary><br><b>
|
||||
|
||||
history command or .bash_history file
|
||||
* also can use up arrow key to access or to show the recent commands you type
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -281,37 +277,24 @@ Alternatively if you are using a distro with systemd it's recommended to use sys
|
||||
|
||||
<details>
|
||||
<summary>Explain Linux I/O redirection</summary><br><b>
|
||||
In Linux, IO redirection is a way of changing the default input/output behavior of a command or program. It allows you to redirect input and output from/to different sources/destinations, such as files, devices, and other commands.
|
||||
|
||||
Here are some common examples of IO redirection:
|
||||
* Redirecting Standard Output (stdout):
|
||||
<code>ls > filelist.txt</code>
|
||||
* Redirecting Standard Error (stderr):
|
||||
<code>ls /some/nonexistent/directory 2> error.txt</code>
|
||||
* Appending to a file:
|
||||
<code>echo "hello" >> myfile.txt</code>
|
||||
* Redirecting Input (stdin):
|
||||
<code>sort < unsorted.txt</code>
|
||||
* Using Pipes: Pipes ("|"):
|
||||
<code>ls | grep "\.txt$"</code>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Demonstrate Linux output redirection</summary><br><b>
|
||||
|
||||
<code>ls > ls_output.txt</code>
|
||||
ls > ls_output.txt
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Demonstrate Linux stderr output redirection</summary><br><b>
|
||||
|
||||
<code>yippiekaiyay 2> ls_output.txt</code>
|
||||
yippiekaiyay 2> ls_output.txt
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Demonstrate Linux stderr to stdout redirection</summary><br><b>
|
||||
|
||||
<code>yippiekaiyay &> file</code>
|
||||
yippiekaiyay &> file
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -376,7 +359,6 @@ The command passed to the boot loader to run the kernel
|
||||
|
||||
<details>
|
||||
<summary>In which path can you find the system devices (e.g. block storage)?</summary><br><b>
|
||||
/dev
|
||||
</b></details>
|
||||
|
||||
<a name="questions-linux-permissions"></a>
|
||||
@ -1078,7 +1060,7 @@ sar -n TCP,ETCP 1
|
||||
<details>
|
||||
<summary>How to list all the processes running in your system?</summary><br><b>
|
||||
|
||||
The "ps" command can be used to list all the processes running in a system. The "ps aux" command provides a detailed list of all the processes, including the ones running in the background.
|
||||
`ps -ef`
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -1121,28 +1103,22 @@ To view all available signals run `kill -l`
|
||||
|
||||
<details>
|
||||
<summary>What <code>kill 0</code> does?</summary><br><b>
|
||||
"kill 0" sends a signal to all processes in the current process group. It is used to check if the processes exist or not
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What <code>kill -0 <PID></code> does?</summary><br><b>
|
||||
"kill -0" checks if a process with a given process ID exists or not. It does not actually send any signal to the process.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is a trap?</summary><br><b>
|
||||
A trap is a mechanism that allows the shell to intercept signals sent to a process and perform a specific action, such as handling errors or cleaning up resources before terminating the process.
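In bash this is done with the `trap` builtin. As an illustration of the same idea, a minimal Python sketch (Unix only) that registers a cleanup handler for SIGINT/SIGTERM:

```python
import signal
import sys

def cleanup(signum, frame):
    # e.g. remove temp files, close connections, then exit cleanly
    print(f"caught signal {signum}, cleaning up before exit")
    sys.exit(0)

# intercept Ctrl+C (SIGINT) and a polite termination request (SIGTERM)
signal.signal(signal.SIGINT, cleanup)
signal.signal(signal.SIGTERM, cleanup)

signal.pause()  # wait here until a signal arrives
```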
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Every couple of days, a certain process stops running. How can you look into why it's happening?</summary><br><b>
|
||||
One way to investigate why a process stops running is to check the system logs, such as the messages in /var/log/messages or journalctl. Additionally, checking the process's resource usage and system load may provide clues as to what caused the process to stop
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What happens when you press ctrl + c?</summary><br><b>
|
||||
When you press "Ctrl+C," it sends the SIGINT signal to the foreground process, asking it to terminate gracefully.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -1166,7 +1142,6 @@ Zombie (z)
|
||||
|
||||
<details>
|
||||
<summary>How do you kill a process in D state?</summary><br><b>
|
||||
A process in D state (also known as "uninterruptible sleep") cannot be killed using the "kill" command. The only way to terminate it is to reboot the system.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -1195,7 +1170,7 @@ You can also try closing/terminating the parent process. This will make the zomb
|
||||
* Zombie Processes
|
||||
</summary><br><b>
|
||||
|
||||
If you mention at any point the ps command with arguments, be familiar with what these arguments do exactly.
|
||||
If you mention at any point the ps command with arguments, be familiar with what these arguments do exactly.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -1209,24 +1184,14 @@ It is the first process executed by the kernel during the booting of a system. I
|
||||
|
||||
<details>
|
||||
<summary>How to change the priority of a process? Why would you want to do that?</summary><br><b>
|
||||
To change the priority of a process, you can use the nice command in Linux. The nice command allows you to specify the priority of a process by assigning a niceness value ranging from -20 to 19. A higher niceness value means a lower priority for the process, and vice versa.
|
||||
|
||||
You may want to change the priority of a process to adjust the amount of CPU time it is allocated by the system scheduler. For example, if you have a CPU-intensive process running on your system that is slowing down other processes, you can lower its priority to give more CPU time to other processes.
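In the shell this is usually done with `nice`/`renice`. As a rough illustration, a process can also lower its own priority from Python via `os.nice` (Unix only):

```python
import os

print("current niceness:", os.nice(0))   # nice(0) just reports the current value
os.nice(10)                              # raise niceness by 10 => lower scheduling priority
print("new niceness:", os.nice(0))

# from here on, CPU-heavy work in this process yields more readily to other processes
```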
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Can you explain how a network process/connection is established and how it's terminated?</summary><br><b>
|
||||
When a client process on one system wants to establish a connection with a server process on another system, it first creates a socket using the socket system call. The client then calls the connect system call, passing the address of the server as an argument. This causes a three-way handshake to occur between the client and server, where the two systems exchange information to establish a connection.
|
||||
|
||||
Once the connection is established, the client and server can exchange data using the read and write system calls. When the connection is no longer needed, the client or server can terminate the connection by calling the close system call on the socket.
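A minimal Python sketch of that life cycle from the client side (the host and port below are made-up examples; network access is assumed):

```python
import socket

# 1. create a socket; 2. connect() triggers the TCP three-way handshake
with socket.create_connection(("example.com", 80), timeout=5) as s:
    # 3. exchange data over the established connection
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = s.recv(4096)
    print(reply.decode(errors="replace"))
# 4. leaving the "with" block calls close(), which tears the connection down
```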
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What <code>strace</code> does? What about <code>ltrace</code>?</summary><br><b>
|
||||
Strace is a debugging tool that is used to monitor the system calls made by a process. It allows you to trace the execution of a process and see the system calls it makes, as well as the signals it receives. This can be useful for diagnosing issues with a process, such as identifying why it is hanging or crashing.
|
||||
|
||||
Ltrace, on the other hand, is a similar tool that is used to trace the library calls made by a process. It allows you to see the function calls made by a process to shared libraries, as well as the arguments passed to those functions. This can be useful for diagnosing issues with a process that involve library calls, such as identifying why a particular library is causing a problem.
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -1649,7 +1614,7 @@ There are 2 configuration files, which stores users information
|
||||
<details>
|
||||
<summary>Which file stores users passwords? Is it visible for everyone?</summary><br>
|
||||
|
||||
`/etc/shadow` file holds the passwords of the users in encrypted format. No, it is only visible to the `root` user
|
||||
`/etc/shadow` file holds the passwords of the users in encrypted format. No, it is only visible to the `root` user
|
||||
</details>
|
||||
|
||||
<details>
|
||||
@ -1980,7 +1945,7 @@ Given the name of an executable and some arguments, it loads the code and static
|
||||
<summary>True or False? A successful call to exec() never returns</summary><br><b>
|
||||
|
||||
True<br>
|
||||
Since a successful exec replaces the current process, it can't return anything to the process that made the call.
|
||||
Since a successful exec replaces the current process, it can't return anything to the process that made the call.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -2276,14 +2241,6 @@ It's used in commands to mark the end of commands options. One common example is
|
||||
|
||||
<details>
|
||||
<summary>What is User-mode Linux?</summary><br><b>
|
||||
In Linux, user mode is a restricted operating mode in which a user's application or process runs. User mode is a non-privileged mode that prevents user-level processes from accessing sensitive system resources directly.
|
||||
|
||||
In user mode, an application can only access hardware resources indirectly, by calling system services or functions provided by the operating system. This ensures that the system's security and stability are maintained by preventing user processes from interfering with or damaging system resources.
|
||||
|
||||
Additionally, user mode also provides memory protection to prevent applications from accessing unauthorized memory locations. This is done by assigning each process its own virtual memory space, which is isolated from other processes.
|
||||
|
||||
In contrast to user mode, kernel mode is a privileged operating mode in which the operating system's kernel has full access to system resources, and can perform low-level operations, such as accessing hardware devices and managing system resources directly.
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
|
@ -17,7 +17,8 @@ touch /tmp/x
|
||||
cp x ~/
|
||||
cp x y
|
||||
mkdir files
|
||||
mv x files | mv y files
|
||||
cp x files
|
||||
cp y files
|
||||
cp -r files copy_of_files
|
||||
mv copy_of_files files2
|
||||
rm -rf files files2
|
||||
|
@ -1,93 +0,0 @@
|
||||
# Observability
|
||||
|
||||
- [Observability](#observability)
|
||||
- [Monitoring](#monitoring)
|
||||
- [Data](#data)
|
||||
- [Application Performance Management](#application-performance-management)
|
||||
|
||||
<details>
|
||||
<summary>What's Observability?</summary><br><b>
|
||||
|
||||
In distributed systems, observability is the ability to collect data about programs' execution, modules' internal states, and the communication among components.<br>
|
||||
To improve observability, software engineers use a wide range of logging and tracing techniques to gather telemetry information, and tools to analyze and use it.<br>
|
||||
Observability is foundational to site reliability engineering, as it is the first step in triaging a service outage.<sup title="wikipedia"><a href="https://en.wikipedia.org/wiki/Observability_(software)">[1]</a></sup>
|
||||
|
||||
</b></details>
|
||||
|
||||
## Monitoring
|
||||
|
||||
<details>
|
||||
<summary>What's monitoring? How is it related to Observability?</summary><br><b>
|
||||
|
||||
Google: "Monitoring is one of the primary means by which service owners keep track of a system’s health and availability".
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What types of monitoring outputs are you familiar with and/or used in the past?</summary><br><b>
|
||||
|
||||
Alerts<br>
|
||||
Tickets<br>
|
||||
Logging<br>
|
||||
</b></details>
|
||||
|
||||
## Data
|
||||
|
||||
<details>
|
||||
<summary>Can you mention what types of things are often monitored in the IT industry?</summary><br><b>
|
||||
|
||||
- Hardware (CPU, RAM, ...)
|
||||
- Infrastructure (Disk capacity, Network latency, ...)
|
||||
- App (Status code, Errors in logs, ...)
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain "Time Series" data</summary><br><b>
|
||||
|
||||
Time series data is sequenced data, measuring a certain parameter in a time-ordered way.
|
||||
|
||||
An example would be CPU utilization every hour:
|
||||
|
||||
```
|
||||
08:00 17
|
||||
09:00 22
|
||||
10:00 91
|
||||
```
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain data aggregation</summary><br><b>
|
||||
|
||||
In monitoring, aggregating data basically means combining a collection of values. It can be done in different ways, like taking the average of multiple values, their sum, or the count of how many times they appear in the collection, and in other ways that mainly depend on the type of the collection (e.g. time series would be one type).
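A small sketch of aggregating the CPU samples from the time-series example above, assuming plain Python:

```python
samples = {"08:00": 17, "09:00": 22, "10:00": 91}  # hour -> CPU utilization (%)

values = list(samples.values())
print("count:", len(values))             # how many samples were collected
print("sum:  ", sum(values))
print("avg:  ", sum(values) / len(values))
print("max:  ", max(values))             # e.g. useful for alerting on spikes
```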
|
||||
|
||||
</b></details>
|
||||
|
||||
|
||||
## Application Performance Management
|
||||
|
||||
<details>
|
||||
<summary>What is Application Performance Management?</summary><br><b>
|
||||
|
||||
- IT metrics translated into business insights
|
||||
- Practices for monitoring application insights so we can improve performance, reduce issues and improve overall user experience
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Name three aspects of a project you can monitor with APM (e.g. backend)</summary><br><b>
|
||||
|
||||
- Frontend
|
||||
- Backend
|
||||
- Infra
|
||||
- ...
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What can be collected/monitored to perform APM monitoring?</summary><br><b>
|
||||
|
||||
- Metrics
|
||||
- Logs
|
||||
- Events
|
||||
- Traces
|
||||
|
||||
</b></details>
|
@ -141,7 +141,7 @@ Federation
|
||||
<details>
|
||||
<summary>What is OpenShift Federation?</summary><br><b>
|
||||
|
||||
Management and deployment of services and workloads across multiple independent clusters from a single API
|
||||
Management and deployment of services and workloads across multiple independent clusters from a single API
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -190,7 +190,7 @@ Master node automatically restarts the pod unless it fails too often.
|
||||
<details>
|
||||
<summary>What happens when a pod fails too often?</summary><br><b>
|
||||
|
||||
It's marked as bad by the master node and temporarily not restarted anymore.
|
||||
It's marked as bad by the master node and temporarily not restarted anymore.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
|
@ -1,3 +0,0 @@
|
||||
## Class
|
||||
|
||||
Write a simple class that has two attributes (one with a default value) and two methods
|
@ -1,55 +0,0 @@
|
||||
## Class 0x00 - Solution
|
||||
|
||||
1. Write a simple class that has two attributes (one with a default value) and two methods
|
||||
|
||||
```python
|
||||
from typing import Optional
|
||||
""" Student Module
|
||||
|
||||
"""
|
||||
|
||||
|
||||
class Student:
|
||||
def __init__(self, name: str, department: Optional[str] = None) -> None:
|
||||
""" Instance Initialization function
|
||||
|
||||
Args:
|
||||
name (str): Name of student
|
||||
department (Optional[str], optional): Department. Defaults to None.
|
||||
"""
|
||||
self.name = name
|
||||
self.department = department
|
||||
|
||||
def getdetails(self) -> str:
|
||||
""" Gets the students details
|
||||
|
||||
Returns:
|
||||
str: A formatted string
|
||||
"""
|
||||
return f"Name is {self.name}, I'm in department {self.department}"
|
||||
|
||||
    def change_department(self, new_department: str) -> None:
|
||||
"""Changes the department of the student object
|
||||
|
||||
Args:
|
||||
            new_department (str): Assigns the new department value to dept attr
|
||||
"""
|
||||
        self.department = new_department
|
||||
|
||||
# student1 instantiation
|
||||
student1 = Student("Ayobami", "Statistics")
|
||||
|
||||
print(student1.getdetails())
|
||||
|
||||
# Calling the change_department function to change the department of student
|
||||
student1.change_department("CS")
|
||||
|
||||
print(student1.department)
|
||||
```
|
||||
|
||||
Output
|
||||
|
||||
```
|
||||
Name is Ayobami, I'm in department Statistics
|
||||
CS
|
||||
```
|
@ -1,26 +0,0 @@
|
||||
## Sort Descending - Solution
|
||||
|
||||
1. Write a function that sorts the following list of lists without using the `sorted()` and `.sort()`
|
||||
functions, in descending order
|
||||
|
||||
- mat_list = [[1, 2, 3], [2, 4, 4], [5, 5, 5]] -> [[5, 5, 5], [2, 4, 4], [1, 2, 3]]
|
||||
|
||||
```python
|
||||
def sort_desc(mat: list) -> list:
|
||||
""" Sorts a list in descending order
|
||||
|
||||
Args:
|
||||
        mat (list): parsed list
|
||||
|
||||
Returns:
|
||||
list: A new list
|
||||
"""
|
||||
new_list = []
|
||||
while mat != []:
|
||||
maxx = max(mat)
|
||||
new_list.append(maxx)
|
||||
mat.remove(maxx)
|
||||
return new_list
|
||||
|
||||
mat_list = [[1, 2, 3], [2, 4, 4], [5, 5, 5]]
print(sort_desc(mat_list))  # [[5, 5, 5], [2, 4, 4], [1, 2, 3]]
|
||||
```
|
@ -1,6 +0,0 @@
|
||||
## Sort Descending
|
||||
|
||||
1. Write a function that sorts the following list of lists without using the `sorted()` and `.sort()`
|
||||
functions, in descending order
|
||||
|
||||
- list = [[1, 2, 3], [2, 4, 4], [5, 5, 5]] -> [[5, 5, 5], [2, 4, 4], [1, 2, 3]]
|
@ -253,7 +253,6 @@ True. It is only used during the key exchange algorithm of symmetric encryption.
|
||||
Hashing is a mathematical function for mapping data of arbitrary sizes to fixed-size values. This function produces a "digest" of the data that can be used for verifying that the data has not been modified (amongst other uses)
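For example, with Python's hashlib (SHA-256 here), the same input always yields the same fixed-size digest, and any change to the input changes the digest completely:

```python
import hashlib

digest = hashlib.sha256(b"hello world").hexdigest()
print(digest)  # 64 hex characters (256 bits), regardless of input size

# changing a single character produces a completely different digest
print(hashlib.sha256(b"hello worle").hexdigest())
```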
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>How is hashing different from encryption?</summary><br><b>
|
||||
|
||||
Encrypted data can be decrypted to its original value. Hashed data cannot be reversed to view the original data - hashing is a one-way function.
|
||||
@ -340,7 +339,7 @@ The 'S' in HTTPS stands for 'secure'. HTTPS uses TLS to provide encryption of HT
|
||||
[Red Hat](https://www.redhat.com/en/topics/security/what-is-cve#how-does-it-work) : "When someone refers to a CVE (Common Vulnerabilities and Exposures), they mean a security flaw that's been assigned a CVE ID number. They don’t include technical data, or information about risks, impacts, and fixes." So a CVE is identified just by an ID. The CVE ID has the following format: CVE prefix + Year + Arbitrary Digits.
|
||||
Anyone can submit a vulnerability, [Exploit Database](https://www.exploit-db.com/submit) explains how it works to submit.
|
||||
|
||||
Then CVSS stands for Common Vulnerability Scoring System; it attempts to assign severity scores to vulnerabilities, allowing responders to rank and prioritize responses and resources according to threat.
|
||||
Then CVSS stands for Common Vulnerability Scoring System; it attempts to assign severity scores to vulnerabilities, allowing responders to rank and prioritize responses and resources according to threat.
|
||||
|
||||
</b></details>
|
||||
|
||||
@ -395,7 +394,7 @@ Spectre is an attack method which allows a hacker to “read over the shoulder
|
||||
<details>
|
||||
<summary>What is CSRF? How to handle CSRF?</summary><br><b>
|
||||
|
||||
Cross-Site Request Forgery (CSRF) is an attack that makes the end user initiate an unwanted action on a web application in which they have an authenticated session. The attacker may use an email to force the end user to click a link that then executes malicious actions. When a CSRF attack is successful, it compromises the end user's data
|
||||
Cross-Site Request Forgery (CSRF) is an attack that makes the end user initiate an unwanted action on a web application in which they have an authenticated session. The attacker may use an email to force the end user to click a link that then executes malicious actions. When a CSRF attack is successful, it compromises the end user's data
|
||||
|
||||
You can use OWASP ZAP to analyze a "request" and check whether there is any protection against cross-site request forgery (for example, when the Security Level is set to 0, the value of csrf-token is SecurityIsDisabled). One can then use data from this request to prepare a CSRF attack by using OWASP ZAP
|
||||
</b></details>
|
||||
|
@ -14,7 +14,7 @@
|
||||
|Sum|Functions|[Exercise](sum.md)|[Solution](solutions/sum.md) | Basic
|
||||
|Number of Arguments|Case Statement|[Exercise](num_of_args.md)|[Solution](solutions/num_of_args.md) | Basic
|
||||
|Empty Files|Misc|[Exercise](empty_files.md)|[Solution](solutions/empty_files.md) | Basic
|
||||
|Directories Comparison|Misc|[Exercise](directories_comparison.md)|[Solution](solutions/directories_comparison.md) | Basic
|
||||
|Directories Comparison|Misc|[Exercise](directories_comparison.md)| :( | Basic
|
||||
|It's alive!|Misc|[Exercise](host_status.md)|[Solution](solutions/host_status.md) | Intermediate
|
||||
|
||||
## Shell Scripting - Self Assessment
|
||||
|
@ -7,3 +7,17 @@ Note: assume the script is executed with an argument
|
||||
1. Write a script that will check if a given argument is the string "pizza"
|
||||
1. If it's the string "pizza" print "with pineapple?"
|
||||
2. If it's not the string "pizza" print "I want pizza!"
|
||||
|
||||
### Solution
|
||||
|
||||
```
|
||||
#!/usr/bin/env bash
|
||||
|
||||
arg_value=${1:-default}
|
||||
|
||||
if [ $arg_value = "pizza" ]; then
|
||||
echo "with pineapple?"
|
||||
else
|
||||
echo "I want pizza!"
|
||||
fi
|
||||
```
|
||||
|
@ -1,17 +0,0 @@
|
||||
## Argument Check
|
||||
|
||||
### Objectives
|
||||
|
||||
Note: assume the script is executed with an argument
|
||||
|
||||
1. Write a script that will check if a given argument is the string "pizza"
|
||||
2. If it's the string "pizza" print "with pineapple?"
|
||||
3. If it's not the string "pizza" print "I want pizza!"
|
||||
|
||||
### Solution
|
||||
|
||||
```
|
||||
#!/usr/bin/env bash
|
||||
|
||||
[[ ${1} == "pizza" ]] && echo "with pineapple?" || echo "I want pizza!"
|
||||
```
|
@ -4,7 +4,7 @@
|
||||
|
||||
1. You are given two directories as arguments and the output should be any difference between the two directories
|
||||
|
||||
### Solution 1
|
||||
### Solution
|
||||
|
||||
Suppose the name of the bash script is ```dirdiff.sh```
|
||||
|
||||
@ -26,12 +26,5 @@ then
|
||||
fi
|
||||
|
||||
diff -q "$1" "$2"
|
||||
```
|
||||
|
||||
### Solution 2
|
||||
|
||||
With GNU diff, you can compare directories recursively.
|
||||
|
||||
```shell
|
||||
diff --recursive directory1 directory2
|
||||
|
||||
```
|
@ -41,8 +41,6 @@
|
||||
|
||||
<details>
|
||||
<summary>What programming language do you prefer to use for DevOps related tasks? Why specifically this one?</summary><br><b>
|
||||
|
||||
For example, Python. It's multipurpose, easy-to-learn, continuously-evolving, and open-source. And it's very popular today
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -62,30 +60,18 @@ Statements are instructions executed by the interpreter like variable assignment
|
||||
|
||||
<details>
|
||||
<summary>What is Object Oriented Programming? Why is it important?</summary><br><b>
|
||||
|
||||
[educative.io](https://www.educative.io/blog/object-oriented-programming) "Object-Oriented Programming (OOP) is a programming paradigm in computer science that relies on the concept of classes and objects. It is used to structure a software program into simple, reusable pieces of code blueprints (usually called classes), which are used to create individual instances of objects."
|
||||
|
||||
OOP is the mainstream paradigm today. Most of the big services are written with OOP
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain Composition</summary><br><b>
|
||||
|
||||
Composition - the ability to build a complex object out of other, simpler objects (a "has-a" relationship)
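A tiny Python sketch of composition (the class names are invented for illustration): a Car has an Engine rather than inheriting from it.

```python
class Engine:
    def start(self):
        return "engine started"

class Car:
    def __init__(self):
        self.engine = Engine()   # Car is composed of an Engine

    def drive(self):
        return self.engine.start() + ", driving"

print(Car().drive())  # engine started, driving
```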
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is a compiler and interpreter?</summary><br><b>
|
||||
<summary>What is a compiler?</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
[bzfar.org](https://www.bzfar.org/publ/algorithms_programming/programming_languages/translators_compiler_vs_interpetator/42-1-0-50)
|
||||
|
||||
Compiler:
|
||||
|
||||
"A compiler is a translator used to convert high-level programming language to low-level programming language. It converts the whole program in one session and reports errors detected after the conversion. Compiler takes time to do its work as it translates high-level code to lower-level code all at once and then saves it to memory."
|
||||
|
||||
Interpreter:
|
||||
|
||||
"Just like a compiler, is a translator used to convert high-level programming language to low-level programming language. It converts the program line by line and reports errors detected at once, while doing the conversion. With this, it is easier to detect errors than in a compiler."
|
||||
<details>
|
||||
<summary>What is an interpreter?</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -98,52 +84,35 @@ SOLID design principles are about:
|
||||
|
||||
SOLID is:
|
||||
|
||||
* Single Responsibility - A class* should have one ~~responsibility~~ reason to change. Robert Martin reworded the principle this way because the original "single responsibility" phrasing was often misunderstood
|
||||
* Open-Closed - A class should be open for extension, but closed for modification. What this practically means is that you should extend functionality by adding a new code and not by modifying it. Your system should be separated into components so it can be easily extended without breaking everything
|
||||
* Single Responsibility - A class should only have a single responsibility
|
||||
* Open-Closed - An entity should be open for extension, but closed for modification. What this practically means is that you should extend functionality by adding a new code and not by modifying it. Your system should be separated into components so it can be easily extended without breaking everything.
|
||||
* Liskov Substitution - Any derived class should be able to substitute its parent without altering its correctness. Practically, every part of the code will get the expected result no matter which part is using it
|
||||
* Interface Segregation - A client should never depend on anything it doesn't use. Big interfaces must be split into smaller interfaces if needed
|
||||
* Interface segregation - A client should never depend on anything it doesn't use
|
||||
* Dependency Inversion - High level modules should depend on abstractions, not low level modules
|
||||
|
||||
*this can also be a module, component, entity, etc., depending on the project structure and programming language (see the Dependency Inversion sketch below)
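As one illustration, a minimal Python sketch of the Dependency Inversion idea (the class names are invented): the high-level class depends on an abstraction, not on a concrete low-level module.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):               # abstraction the high-level code depends on
    @abstractmethod
    def send(self, message: str) -> None: ...

class EmailNotifier(Notifier):     # low-level detail, easy to swap (e.g. for SMS)
    def send(self, message: str) -> None:
        print(f"email: {message}")

class OrderService:                # high-level module: knows only the abstraction
    def __init__(self, notifier: Notifier):
        self.notifier = notifier

    def place_order(self):
        self.notifier.send("order placed")

OrderService(EmailNotifier()).place_order()
```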
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is YAGNI? What is your opinion on it?</summary><br><b>
|
||||
|
||||
YAGNI - You aren't gonna need it. You should only add functionality that will actually be used; there is no need to add functionality that is not directly needed
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is DRY? What is your opinion on it?</summary><br><b>
|
||||
|
||||
DRY - Don't repeat yourself. In practice it means you shouldn't duplicate logic; use functions/classes instead. But this must be done smartly, paying attention to the domain logic: identical code lines don't always mean real duplication
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What are the four pillars of object oriented programming?</summary><br><b>
|
||||
|
||||
* Abstraction - you don't need to know how a class is implemented. You only need to know what functionality it provides (its interface) and how to use it
|
||||
* Encapsulation - keep fields for class purposes private (or protected) and provide public methods if needed. We must keep the data and code safe within the class itself
|
||||
* Inheritance - gives the ability to create a class that shares some of the attributes of existing classes
|
||||
* Polymorphism - the same method in different contexts can do different things. Method overloading and overriding are some forms of polymorphism (see the sketch below)
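A compact Python sketch touching all four pillars (the class names are invented for illustration):

```python
class Animal:                       # abstraction: callers only need to know speak()
    def __init__(self, name):
        self._name = name           # encapsulation: kept "private" by convention

    def speak(self):
        raise NotImplementedError

class Dog(Animal):                  # inheritance: Dog reuses Animal's attributes
    def speak(self):                # polymorphism: same method, different behavior
        return f"{self._name} says woof"

class Cat(Animal):
    def speak(self):
        return f"{self._name} says meow"

for animal in (Dog("Rex"), Cat("Misty")):
    print(animal.speak())
```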
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain recursion</summary><br><b>
|
||||
|
||||
Recursion - a process (or strategy) in which a function calls itself. It has a recursive case and an exit (base) case: in the recursive case the function calls itself again, and in the exit case it finishes without calling itself. Without an exit case the function would run indefinitely, until memory is exhausted or the call stack limit is hit (see the example below)
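A classic Python example with an explicit exit (base) case:

```python
def factorial(n: int) -> int:
    if n <= 1:                        # exit case: stop recursing
        return 1
    return n * factorial(n - 1)       # recursive case: call itself with a smaller input

print(factorial(5))  # 120
```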
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain Inversion of Control (IoC)</summary><br><b>
|
||||
|
||||
Inversion of Control - a design principle used to achieve loose coupling: instead of your code calling into concrete implementations directly, control is handed to an abstraction layer or framework (similar to SOLID's Dependency Inversion)
|
||||
<summary>Explain Inversion of Control</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain Dependency Injection (DI)</summary><br><b>
|
||||
|
||||
Dependency Injection - a design pattern used to implement IoC: an object's fields (its dependencies) are configured by external code rather than created by the object itself (see the sketch below)
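A minimal Python sketch of constructor injection (the class names are invented): the dependency is created outside and passed in, so it can be swapped, for example for a fake in tests.

```python
class Database:
    def fetch_user(self, user_id):
        return {"id": user_id, "name": "real user"}

class FakeDatabase:
    def fetch_user(self, user_id):
        return {"id": user_id, "name": "test user"}

class UserService:
    def __init__(self, db):            # the dependency is injected, not constructed here
        self.db = db

    def greet(self, user_id):
        return f"hello, {self.db.fetch_user(user_id)['name']}"

print(UserService(Database()).greet(1))      # production wiring
print(UserService(FakeDatabase()).greet(1))  # test wiring, no real DB needed
```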
|
||||
<summary>Explain Dependency Injection</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
@ -160,29 +129,15 @@ True
|
||||
|
||||
<details>
|
||||
<summary>Explain big O notation</summary><br><b>
|
||||
|
||||
[habr.com](https://habr.com/ru/post/559518/) "We can use Big O notation to compare and search different solutions to find which solution is best. The best solution is one that consumes less amount of time and space. Generally, time and space are two parameters that determine the efficiency of the algorithm.
|
||||
|
||||
Big O Notation tells accurately how long an algorithm takes to run. It is a basic analysis of algorithm efficiency. It describes the execution time required. It depends on the size of input data that essentially passes in. Big O notation gives us algorithm complexity in terms of input size. For the large size of input data, the execution time will be slow as compared to the small size of input data. Big O notation is used to analyze space and time."
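A small illustration, assuming Python: both functions find a value in a sorted list, but the first scans every element (O(n)) while the second halves the search space on each step (O(log n)).

```python
def linear_search(items, target):           # O(n): looks at each element in turn
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(items, target):           # O(log n): assumes items are sorted
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(1_000_000))
print(linear_search(data, 999_999), binary_search(data, 999_999))
```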
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is "Duck Typing"?</summary><br><b>
|
||||
|
||||
"When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck."
|
||||
|
||||
This is an approach in programming where we check an object's properties and behavior rather than its type (see the sketch below)
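A small Python sketch: `make_it_quack` never checks types, it only relies on the object having a `quack()` method (the names are invented for illustration).

```python
class Duck:
    def quack(self):
        return "quack"

class Person:
    def quack(self):               # not a Duck, but it quacks like one
        return "I'm quacking!"

def make_it_quack(thing):
    print(thing.quack())           # no isinstance() check - duck typing

make_it_quack(Duck())
make_it_quack(Person())
```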
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Explain string interpolation</summary><br><b>
|
||||
|
||||
String interpolation - the process of evaluating a string literal containing placeholders. For example (JS):</b>
|
||||
```js
|
||||
const messages = 5;
|
||||
console.log(`You have ${messages} new messages`); // You have 5 new messages
|
||||
```
|
||||
</details>
|
||||
</b></details>
|
||||
|
||||
##### Common algorithms
|
||||
|
||||
|
@ -3,7 +3,8 @@
|
||||
1. Improve the following query
|
||||
|
||||
```
|
||||
SELECT COUNT(purchased_at)
|
||||
SELECT count(*)
|
||||
FROM shawarma_purchases
|
||||
WHERE purchased_at BETWEEN '2017-01-01' AND '2017-12-31';
|
||||
WHERE
|
||||
YEAR(purchased_at) == '2017'
|
||||
```
|
||||
|
@ -3,71 +3,9 @@
|
||||
## SRE Questions
|
||||
|
||||
<details>
|
||||
<summary>What is an SLI (Service-Level Indicator)?</summary>
|
||||
<b>
|
||||
An SLI is a measurement used to assess the actual performance or reliability of a service. It serves as the basis for defining SLOs.
|
||||
|
||||
Examples:
|
||||
- Request latency
|
||||
- Processing throughput
|
||||
- Request failures per unit of time
|
||||
|
||||
Read more: [Google SRE Handbook](https://sre.google/sre-book/table-of-contents/)
|
||||
</b>
|
||||
</details></br>
|
||||
<summary>What is SLO (service-level objective)?</summary><br><b>
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What is an SLO (Service-Level Objective)?</summary>
|
||||
<b>
|
||||
|
||||
An SLO is a target value or range of values for a service level that is measured by an SLI
|
||||
|
||||
Example: 99% across 30 days for a specific collection of SLIs.
|
||||
|
||||
It's also worth noting that the SLO also serves as a lower bound, indicating that there is no requirement to be more reliable than necessary, because doing so can delay the rollout of new features.
|
||||
|
||||
Read more: [Google SRE Handbook](https://sre.google/sre-book/table-of-contents/)
|
||||
</b>
|
||||
</details><br>
|
||||
|
||||
<details>
|
||||
<summary>What is an SLA (Service-Level Agreement)?</summary>
|
||||
<b>
|
||||
|
||||
An SLA is a formal agreement between a service provider and customers, specifying the expected service quality and the consequences for not meeting it.
|
||||
|
||||
SRE doesn't typically get involved in constructing SLAs, because SLAs are closely tied to business and product decisions
|
||||
|
||||
Read more: [Google SRE Handbook](https://sre.google/sre-book/table-of-contents/)
|
||||
</b>
|
||||
</details><br>
|
||||
|
||||
<details>
|
||||
<summary>What is an Error Budget?</summary>
|
||||
<b>
|
||||
|
||||
An Error Budget represents the acceptable amount of downtime or errors a service can experience while still meeting its SLO.
|
||||
|
||||
An error budget is 1 minus the SLO of the service. A 99.9% SLO service has a 0.1% error budget.
|
||||
|
||||
If our service receives 1,000,000 requests in four weeks, a 99.9% availability SLO gives us a budget of 1,000 errors over that period.
|
||||
|
||||
The error budget is a mechanism for balancing innovation and stability. If the SRE cannot enforce the error budget, the whole system breaks down.
|
||||
|
||||
Read more: [Google SRE Handbook](https://sre.google/sre-book/table-of-contents/)
|
||||
</b>
|
||||
</details></br>
|
||||
|
||||
<details>
|
||||
<summary>What is Toil?</summary>
|
||||
<b>
|
||||
|
||||
Toil is the kind of work that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows.
|
||||
|
||||
If a task can be automated, you should probably automate it.
|
||||
|
||||
Automation significantly reduces Toil. Investing in automation results in valuable work with lasting impact, offering scalability potential with minimal adjustments as your system expands.
|
||||
|
||||
Read more: [Google SRE Handbook](https://sre.google/sre-book/table-of-contents/)
|
||||
</b>
|
||||
</details>
|
||||
<summary>What is SLA (service-level agreement)?</summary><br><b>
|
||||
</b></details>
|
||||
|
@ -179,7 +179,7 @@ Run `terraform apply`. That will apply the changes described in your .tf files.
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>How to cleanup Terraform resources? Why the user should be careful doing so?</summary><br><b>
|
||||
<summary>How to cleanup Terraform resources? Why the user should be careful doing so?</summary><br><b>
|
||||
|
||||
`terraform destroy` will cleanup all the resources tracked by Terraform.
|
||||
|
||||
@ -359,7 +359,7 @@ False. You can specify any provider from any URL, not only those from hashicorp.
|
||||
#### Input Variables
|
||||
|
||||
<details>
|
||||
<summary>What are input variables good for in Terraform?</summary><br><b>
|
||||
<summary>What input variables are good for in Terraform?</summary><br><b>
|
||||
|
||||
Variables allow you to define a piece of data in one location instead of repeating its hardcoded value in multiple different locations. Then, when you need to modify the variable's value, you do it in one location instead of changing each one of the hardcoded values.
|
||||
</b></details>
|
||||
@ -628,7 +628,7 @@ data "aws_vpc" "default {
|
||||
<details>
|
||||
<summary>How to get data out of a data source?</summary><br><b>
|
||||
|
||||
The general syntax is `data.<PROVIDER_AND_TYPE>.<NAME>.<ATTRIBUTE>`
|
||||
The general syntax is `data.<PROVIDER_AND_TYPE>.<NAME>.<ATTRIBUTE>`
|
||||
|
||||
So if you defined the following data source
|
||||
|
||||
@ -923,7 +923,7 @@ It starts with acquiring a state lock so others can't modify the state at the sa
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What would be the process of switching back from remote backend to local?</summary><br><b>
|
||||
<summary>What would be the process of switching back from remote backend to local?</summary><br><b>
|
||||
|
||||
1. You remove the backend code and perform `terraform init` to switch back to `local` backend
|
||||
2. You remove the resources that are the remote backend itself
|
||||
@ -940,7 +940,7 @@ One way to deal with it is using partial configurations in a completely separate
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Is there a way to obtain information from a remote backend/state using Terraform?</summary><br><b>
|
||||
<summary>Is there a way to obtain information from a remote backend/state using Terraform?</summary><br><b>
|
||||
|
||||
Yes, using the concept of data sources. There is a data source for a remote state called "terraform_remote_state".
|
||||
|
||||
@ -965,9 +965,9 @@ True
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>Why workspaces might not be the best solution for managing states for different environments? like staging and production</summary><br><b>
|
||||
<summary>Why workspaces might not be the best solution for managing states for different environments? like staging and production</summary><br><b>
|
||||
|
||||
One reason is that all the workspaces are stored in one location (as in one backend) and usually you don't want to use the same access control and authentication for both staging and production for obvious reasons. Also working in workspaces is quite prone to human errors as you might accidentally think you are in one workspace, while you are working in a completely different one.
|
||||
One reason is that all the workspaces are stored in one location (as in one backend) and usually you don't want to use the same access control and authentication for both staging and production for obvious reasons. Also working in workspaces is quite prone to human errors as you might accidentally think you are in one workspace, while you are working in a completely different one.
|
||||
</b></details>
|
||||
|
||||
|
||||
@ -1167,7 +1167,7 @@ for_each can applied only on collections like maps or sets so the list should be
|
||||
|
||||
|
||||
```
|
||||
resource "some_instance" "instance" {
|
||||
resouce "some_instance" "instance" {
|
||||
|
||||
dynamic "tag" {
|
||||
for_each = var.tags
|
||||
@ -1715,6 +1715,14 @@ It's not secure! you should never store credentials in plain text this way.
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>What can you do to NOT store provider credentials in Terraform configuration files in plain text?</summary><br><b>
|
||||
|
||||
1. Use environment variables
|
||||
2. Use password CLIs (like 1Password, which is generic, but there are also provider-specific options like aws-vault)
|
||||
|
||||
</b></details>
|
||||
|
||||
<details>
|
||||
<summary>How can you manage secrets/credentials in CI/CD?</summary><br><b>
|
||||
|
||||
@ -1829,7 +1837,7 @@ Suggest to use Terraform modules.
|
||||
<summary>When working with a nested layout of many directories, it can be cumbersome to run terraform commands in many different folders. How to deal with it?</summary><br><b>
|
||||
|
||||
There are multiple ways to deal with it:
|
||||
1. Write scripts that perform some commands recursively with different conditions
|
||||
1. Write scripts that perform some commands recursively with different conditions
|
||||
2. Use tools like Terragrunt, which has commands like "run-all" that can run in parallel on multiple different paths
|
||||
|
||||
</b></details>
|
||||
|