Fix typos (#411)

Found via `codespell -L caf,etcp,alle,aks`
Kian-Meng Ang 2023-08-25 04:02:53 +08:00 committed by GitHub
parent bf95e8f81e
commit 4b6718938c
29 changed files with 77 additions and 77 deletions

@@ -3,7 +3,7 @@ THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE COMMONS
BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY BE CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS.
1. Definitions
"Adaptation" means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered an Adaptation for the purpose of this License.
"Adaptation" means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("syncing") will be considered an Adaptation for the purpose of this License.
"Collection" means a collection of literary or artistic works, such as encyclopedias and anthologies, or performances, phonograms or broadcasts, or other works or subject matter other than works listed in Section 1(f) below, which, by reason of the selection and arrangement of their contents, constitute intellectual creations, in which the Work is included in its entirety in unmodified form along with one or more other contributions, each constituting separate and independent works in themselves, which together are assembled into a collective whole. A work that constitutes a Collection will not be considered an Adaptation (as defined above) for the purposes of this License.
"Distribute" means to make available to the public the original and copies of the Work through sale or other transfer of ownership.
"Licensor" means the individual, individuals, entity or entities that offer(s) the Work under the terms of this License.
@@ -37,7 +37,7 @@ Voluntary License Schemes. The Licensor reserves the right to collect royalties,
Except as otherwise agreed in writing by the Licensor or as may be otherwise permitted by applicable law, if You Reproduce, Distribute or Publicly Perform the Work either by itself or as part of any Collections, You must not distort, mutilate, modify or take other derogatory action in relation to the Work which would be prejudicial to the Original Author's honor or reputation.
5. Representations, Warranties and Disclaimer
UNLESS OTHERWISE MUTUALLY AGREED BY THE PARTIES IN WRITING, LICENSOR OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
UNLESS OTHERWISE MUTUALLY AGREED BY THE PARTIES IN WRITING, LICENSOR OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
6. Limitation on Liability.
EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

@@ -798,10 +798,10 @@ https://www.minitool.com/lib/virtual-memory.html
Copy-on-write (COW) is a resource management concept, with the goal to reduce unnecessary copying of information. It is a concept which is implemented for instance within the POSIX fork syscall, which creates a duplicate process of the calling process.
The idea:
1. If resources are shared between 2 or more entities (for example shared memory segments between 2 processes) the resources don't need to be copied for every entity, but rather every entity has a READ operation access permission on the shared resource. (the shared segements are marked as read-only)
1. If resources are shared between 2 or more entities (for example shared memory segments between 2 processes) the resources don't need to be copied for every entity, but rather every entity has a READ operation access permission on the shared resource. (the shared segments are marked as read-only)
(Think of every entity having a pointer to the location of the shared resource which can be dereferenced to read its value)
2. If one entity would perform a WRITE operation on a shared resource a problem would arise since the resource also would be permanently changed for ALL other entities sharing it.
(Think of a process modifying some variables on the stack, or allocatingy some data dynamically on the heap, these changes to the shared resource would also apply for ALL other processes, this is definetly an undesirable behaviour)
(Think of a process modifying some variables on the stack, or allocatingy some data dynamically on the heap, these changes to the shared resource would also apply for ALL other processes, this is definitely an undesirable behaviour)
3. As a solution only if a WRITE operation is about to be performed on a shared resource, this resource gets COPIED first and then the changes are applied.
</b></details>
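To make step 3 concrete, here is a minimal Python sketch (assuming a POSIX system, since it relies on `os.fork`): after the fork the child initially shares the parent's memory pages, and the child's write triggers a private copy, leaving the parent's data untouched.

```python
import os

data = ["original"]  # memory shared copy-on-write after fork()

pid = os.fork()
if pid == 0:
    # Child: this WRITE causes the kernel to copy the affected page,
    # so the change stays private to the child.
    data[0] = "changed in child"
    print("child sees: ", data[0])
    os._exit(0)
else:
    os.waitpid(pid, 0)
    print("parent sees:", data[0])  # still "original"
```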
@@ -1304,7 +1304,7 @@ Output: <code><br>
</code>
In `mod1` a is link, and when we're using `a[i]`, we're changing `s1` value to.
But in `mod2`, `append` creats new slice, and we're changing only `a` value, not `s2`.
But in `mod2`, `append` creates new slice, and we're changing only `a` value, not `s2`.
[Aritcle about arrays](https://golangbot.com/arrays-and-slices/),
[Blog post about `append`](https://blog.golang.org/slices)
@@ -1362,7 +1362,7 @@ Output: 3
<details>
<summary>What are the advantages of MongoDB? Or in other words, why choosing MongoDB and not other implementation of NoSQL?</summary><br><b>
MongoDB advantages are as followings:
MongoDB advantages are as following:
- Schemaless
- Easy to scale-out
- No complex joins
@@ -1403,7 +1403,7 @@ as key-value pair, document-oriented, etc.
<details>
<summary>What is better? Embedded documents or referenced?</summary><br><b>
* There is no definitive answer to which is better, it depends on the specific use case and requirements. Some explainations : Embedded documents provide atomic updates, while referenced documents allow for better normalization.
* There is no definitive answer to which is better, it depends on the specific use case and requirements. Some explanations : Embedded documents provide atomic updates, while referenced documents allow for better normalization.
</b></details>
<details>
@@ -2171,7 +2171,7 @@ This is where data is stored and also where different processing takes place (e.
<details>
<summary>What is a master node?</summary><br><b>
Part of a master node responsibilites:
Part of a master node responsibilities:
* Track the status of all the nodes in the cluster
* Verify replicas are working and the data is available from every data node.
* No hot nodes (no data node that works much harder than other nodes)
@@ -2183,7 +2183,7 @@ While there can be multiple master nodes in reality only of them is the elected
<summary>What is an ingest node?</summary><br><b>
A node which responsible for processing the data according to ingest pipeline. In case you don't need to use
logstash then this node can recieve data from beats and process it, similarly to how it can be processed
logstash then this node can receive data from beats and process it, similarly to how it can be processed
in Logstash.
</b></details>
@@ -2239,7 +2239,7 @@ As in NoSQL a document is a JSON object which holds data on a unit in your app.
<details>
<summary>You check the health of your elasticsearch cluster and it's red. What does it mean? What can cause the status to be yellow instead of green?</summary><br><b>
Red means some data is unavailable in your cluster. Some shards of your indices are unassinged.
Red means some data is unavailable in your cluster. Some shards of your indices are unassigned.
There are some other states for the cluster.
Yellow means that you have unassigned shards in the cluster. You can be in this state if you have single node and your indices have replicas.
Green means that all shards in the cluster are assigned to nodes and your cluster is healthy.
@@ -2600,7 +2600,7 @@ While automation focuses on a task level, Orchestration is the process of automa
</b></details>
<details>
<summary>What is a Debuggger and how it works?</summary><br><b>
<summary>What is a Debugger and how it works?</summary><br><b>
</b></details>
<details>
@@ -2789,7 +2789,7 @@ False. It doesn't maintain state for incoming request.
It consists of:
* Request line - request type
* Headers - content info like length, enconding, etc.
* Headers - content info like length, encoding, etc.
* Body (not always included)
</b></details>
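For illustration, the three parts can be seen by assembling a request with Python's standard library (`example.com` is just a placeholder host):

```python
import http.client

conn = http.client.HTTPSConnection("example.com")
body = "name=test"
conn.request(
    "POST", "/submit",            # request line: method + path
    body=body,                    # body (not always included)
    headers={                     # headers: content info like length, encoding
        "Content-Type": "application/x-www-form-urlencoded",
        "Content-Length": str(len(body)),
    },
)
response = conn.getresponse()
print(response.status, response.reason)
conn.close()
```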
@@ -3039,7 +3039,7 @@ CPU cache.
A memory leak is a programming error that occurs when a program fails to release memory that is no longer needed, causing the program to consume increasing amounts of memory over time.
The leaks can lead to a variety of problems, including system crashes, performance degradation, and instability. Usually occuring after failed maintenance on older systems and compatibility with new components over time.
The leaks can lead to a variety of problems, including system crashes, performance degradation, and instability. Usually occurring after failed maintenance on older systems and compatibility with new components over time.
</b></details>
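One common shape of such a leak, sketched in Python (an unbounded cache that is never evicted; names are illustrative):

```python
cache = {}  # grows forever: entries are added but never removed

def handle_request(request_id: int) -> str:
    result = f"response for {request_id}"
    cache[request_id] = result  # each request consumes memory that is never released
    return result
```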
<details>
@@ -3097,7 +3097,7 @@ Cons:
<details>
<summary>Explain File Storage</summary><br><b>
- File Storage used for storing data in files, in a hierarchical sturcture
- File Storage used for storing data in files, in a hierarchical structure
- Some of the devices for file storage: hard drive, flash drive, cloud-based file storage
- Files usually organized in directories
</b></details>
@@ -3275,7 +3275,7 @@ Given a text file, perform the following exercises
- "^\w+"
Bonus: extract the last word of each line
- "\w+(?=\W*$)" (in most cases, depends on line formating)
- "\w+(?=\W*$)" (in most cases, depends on line formatting)
</b></details>
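Both patterns can be checked quickly in Python (a sketch; as noted, the last-word pattern depends on line formatting):

```python
import re

for line in ["The quick brown fox.", "jumps over the lazy dog"]:
    first = re.search(r"^\w+", line)        # first word of the line
    last = re.search(r"\w+(?=\W*$)", line)  # last word, skipping trailing punctuation
    print(first.group(), "...", last.group())
```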
<details>

@@ -24,7 +24,7 @@ SAAS
* IAAS
* PAAS
* SAAS</summary><br><b>
- IAAS - Infrastructure As A Service is a cloud computing service where a cloud provider rents out IT infrastructure such as compute, networking resources and strorage over the internet.<br>
- IAAS - Infrastructure As A Service is a cloud computing service where a cloud provider rents out IT infrastructure such as compute, networking resources and storage over the internet.<br>
- PAAS - Platform As A Service is a cloud hosting platform with an on-demand access to ready-to-use set of deployment, application management and DevOps tools.<br>
@@ -432,7 +432,7 @@ False. Users can belong to multiple groups.
<summary>What are Roles?</summary><br><b>
A way for allowing a service of AWS to use another service of AWS. You assign roles to AWS resources.
For example, you can make use of a role which allows EC2 service to acesses s3 buckets (read and write).
For example, you can make use of a role which allows EC2 service to accesses s3 buckets (read and write).
</b></details>
<details>

@@ -55,7 +55,7 @@ False. Users can belong to multiple groups.
<summary>What are Roles?</summary><br><b>
A way for allowing a service of AWS to use another service of AWS. You assign roles to AWS resources.
For example, you can make use of a role which allows EC2 service to acesses s3 buckets (read and write).
For example, you can make use of a role which allows EC2 service to accesses s3 buckets (read and write).
</b></details>
<details>

@@ -112,7 +112,7 @@ Be familiar with the company you are interviewing at. Some ideas:
From my experience, this is not done by many candidates but it's one of the best ways to deep dive into topics like operating system, virtualization, scale, distributed systems, etc.
In most cases, you will do fine without reading books but for the AAA interviews (hardest level) you'll want to read some books and overall if you inspire to be better DevOps Engineer, books (also articles, blog posts) is a great way devleop yourself :)
In most cases, you will do fine without reading books but for the AAA interviews (hardest level) you'll want to read some books and overall if you inspire to be better DevOps Engineer, books (also articles, blog posts) is a great way develop yourself :)
### Consider starting in non-DevOps position

@@ -6,7 +6,7 @@ import os
def main():
"""Reads through README.md for question/answer pairs and adds them to a
list to randomly select from and quiz yourself.
Supports skipping quesitons with no documented answer with the -s flag
Supports skipping questions with no documented answer with the -s flag
"""
parser = optparse.OptionParser()
parser.add_option("-s", "--skip", action="store_true",

@@ -10,6 +10,6 @@ for file in ${MD_FILES[@]}; do
python ${PROJECT_DIR}/tests/syntax_lint.py ${file} > /dev/null
done
echo "- Syntax lint tests on MD files passed sucessfully"
echo "- Syntax lint tests on MD files passed successfully"
flake8 --max-line-length=100 . && echo "- PEP8 Passed"

@@ -352,7 +352,7 @@ A full list can be found at [PlayBook Variables](https://docs.ansible.com/ansib
* Host facts override play variables
* A role might include the following: vars, meta, and handlers
* Dynamic inventory is generated by extracting information from external sources
* Its a best practice to use indention of 2 spaces instead of 4
* Its a best practice to use indentation of 2 spaces instead of 4
* notify used to trigger handlers
* This “hosts: all:!controllers” means run only on controllers group hosts</summary><br><b>
</b></details>

@@ -134,7 +134,7 @@ The answer is yes, it's possible. You can configure ArgoCD to sync to desired st
<details>
<summary>How cluster disaster recovery becomes easier with ArgoCD?</summary><br><b>
Imagine you have a cluster in the cloud, in one of the regions. Something happens to that cluster and it's either crashes or simply no longer opertional.
Imagine you have a cluster in the cloud, in one of the regions. Something happens to that cluster and it's either crashes or simply no longer operational.
If you have all your cluster configuration in a GitOps repository, ArgoCD can be pointed to that repository while be configured to use a new cluster you've set up and apply that configuration so your cluster is again up and running with the same status as o
</b></details>
@@ -335,7 +335,7 @@ There are multiple ways to deal with it:
<summary>What are some possible health statuses for an ArgoCD application?</summary><br><b>
* Healthy
* Missing: resource doesn't exist in the cluser
* Missing: resource doesn't exist in the cluster
* Suspended: resource is paused
* Progressing: resources isn't healthy but will become healthy or has the chance to become healthy
* Degraded: resource isn't healthy

@@ -14,4 +14,4 @@
## Solution
Click [here](soltuion.md) to view the solution
Click [here](solution.md) to view the solution

@@ -497,7 +497,7 @@ EBS
<details>
<summary>What happens to EBS volumes when the instance is terminated?</summary><br><b>
By deafult, the root volume is marked for deletion, while other volumes will still remain.<br>
By default, the root volume is marked for deletion, while other volumes will still remain.<br>
You can control what will happen to every volume upon termination.
</b></details>
@@ -1258,7 +1258,7 @@ This not only provides enhanced security but also easier access for the user whe
- Uploading images to S3 and tagging them or inserting information on the images to a database
- Uploading videos to S3 and edit them or add subtitles/captions to them and store the result in S3
- Use SNS and/or SQS to trigger functions based on notifications or messages receieved from these services.
- Use SNS and/or SQS to trigger functions based on notifications or messages received from these services.
- Cron Jobs: Use Lambda together with CloudWatch events to schedule tasks/functions periodically.
</b></details>
@@ -2594,7 +2594,7 @@ AWS Cognito
</b></details>
<details>
<summary>Which service is often reffered to as "used for decoupling applications"?</summary><br><b>
<summary>Which service is often referred to as "used for decoupling applications"?</summary><br><b>
AWS SQS. Since it's a messaging queue so it allows applications to switch from synchronous communication to asynchronous one.
</b></details>

@@ -9,7 +9,7 @@ Initialize a CDK project and set up files required to build a CDK project.
#### Initialize a CDK project
1. Install CDK on your machine by running `npm install -g aws-cdk`.
2. Create a new directory named `sample` for your project and run `cdk init app --language typescript` to initialize a CDK project. You can choose lanugage as csharp, fsharp, go, java, javascript, python or typescript.
2. Create a new directory named `sample` for your project and run `cdk init app --language typescript` to initialize a CDK project. You can choose language as csharp, fsharp, go, java, javascript, python or typescript.
3. You would see the following files created in your directory:
1. `cdk.json`, `tsconfig.json`, `package.json` - These are configuration files that are used to define some global settings for your CDK project.
2. `bin/sample.ts` - This is the entry point for your CDK project. This file is used to define the stack that you want to create.

@@ -7,7 +7,7 @@
| Set up a CI pipeline | CI | [Exercise](ci_for_open_source_project.md) | | |
| Deploy to Kubernetes | Deployment | [Exercise](deploy_to_kubernetes.md) | [Solution](solutions/deploy_to_kubernetes/README.md) | |
| Jenkins - Remove Jobs | Jenkins Scripts | [Exercise](remove_jobs.md) | [Solution](solutions/remove_jobs_solution.groovy) | |
| Jenkins - Remove Builds | Jenkins Sripts | [Exercise](remove_builds.md) | [Solution](solutions/remove_builds_solution.groovy) | |
| Jenkins - Remove Builds | Jenkins Scripts | [Exercise](remove_builds.md) | [Solution](solutions/remove_builds_solution.groovy) | |
### CI/CD Self Assessment
@@ -546,7 +546,7 @@ For example, you might configure the workflow to trigger every time a changed is
</b></details>
<details>
<summary>True or False? In Github Actions, jobs are executed in parallel by deafult</summary><br><b>
<summary>True or False? In Github Actions, jobs are executed in parallel by default</summary><br><b>
True
</b></details>

@@ -1,5 +1,5 @@
## Deploy to Kubernetes
* Write a pipeline that will deploy an "hello world" web app to Kubernete
* Write a pipeline that will deploy an "hello world" web app to Kubernetes
* The CI/CD system (where the pipeline resides) and the Kubernetes cluster should be on separate systems
* The web app should be accessible remotely and only with HTTPS

@@ -6,7 +6,7 @@ Note: this exercise can be solved in various ways. The solution described here i
2. Deploy Kubernetes on a remote host (minikube can be an easy way to achieve it)
3. Create a simple web app or [page](html)
4. Create Kubernetes [resoruces](helloworld.yml) - Deployment, Service and Ingress (for HTTPS access)
4. Create Kubernetes [resources](helloworld.yml) - Deployment, Service and Ingress (for HTTPS access)
5. Create an [Ansible inventory](inventory) and insert the address of the Kubernetes cluster
6. Write [Ansible playbook](deploy.yml) to deploy the Kubernetes resources and also generate
7. Create a [pipeline](Jenkinsfile)

@@ -14,7 +14,7 @@
openssl_privatekey:
path: /etc/ssl/private/privkey.pem
- name: generate openssl certficate signing requests
- name: generate openssl certificate signing requests
openssl_csr:
path: /etc/ssl/csr/hello-world.app.csr
privatekey_path: /etc/ssl/private/privkey.pem

@@ -822,7 +822,7 @@ Through the use of namespaces and cgroups. Linux kernel has several types of nam
* namespaces: same as cgroups, namespaces isolate some of the system resources so it's available only for processes in the namespace. Differently from cgroups the focus with namespaces is on resources like mount points, IPC, network, ... and not about memory and CPU as in cgroups
* SElinux: the access control mechanism used to protect processes. Unfortunately to this date many users don't actually understand SElinux and some turn it off but nontheless, it's a very important security feature of the Linux kernel, used by container as well
* SElinux: the access control mechanism used to protect processes. Unfortunately to this date many users don't actually understand SElinux and some turn it off but nonetheless, it's a very important security feature of the Linux kernel, used by container as well
* Seccomp: similarly to SElinux, it's also a security mechanism, but its focus is on limiting the processes in regards to using system calls and file descriptors
</b></details>
@@ -1224,7 +1224,7 @@ In rootless containers, user namespace appears to be running as root but it does
<details>
<summary>When running a container, usually a virtual ethernet device is created. To do so, root privileges are required. How is it then managed in rootless containers?</summary><br><b>
Networking is usually managed by Slirp in rootless containers. Slirp creates a tap device which is also the default route and it creates it in the network namepsace of the container. This device's file descriptor passed to the parent who runs it in the default namespace and the default namespace connected to the internet. This enables communication externally and internally.
Networking is usually managed by Slirp in rootless containers. Slirp creates a tap device which is also the default route and it creates it in the network namespace of the container. This device's file descriptor passed to the parent who runs it in the default namespace and the default namespace connected to the internet. This enables communication externally and internally.
</b></details>
<details>

@@ -270,7 +270,7 @@ We can understand web servers using two view points, which is:
## How communication between web server and web browsers established:
Whenever a browser needs a file that is hosted on a web server, the browser requests the page from the web server and the web server responds with that page.
This communcation between web browser and web server happens in the following ways:
This communication between web browser and web server happens in the following ways:
(1) User enters the domain name in the browser,and the browser then search for the IP address of the entered name. It can be done in 2 ways-
@@ -455,7 +455,7 @@ A repository that doesn't holds the application source code, but the configurati
One might say we need more details as to what these configuration and infra files look like exactly and how complex the application and its CI/CD pipeline(s), but in general, most of the time you will want to put configuration and infra related files in their own separate repository and not in the repository of the application for multiple reasons:
* Every change submitted to the configuration, shouldn't trigger the CI/CD of the application, it should be testing out and applying the modified configuration, not the application itself
* When you mix application code with conifguration and infra related files
* When you mix application code with configuration and infra related files
</b></details>
#### SRE
@@ -525,12 +525,12 @@ Read more about it [here](https://sre.google/sre-book/eliminating-toil/)
<details>
<summary>What is a postmortem ? </summary><br><b>
The postmortem is a process that should take place folowing an incident. Its purpose is to identify the root cause of an incident and the actions that should be taken to avoid this kind of incidents from hapenning again. </b></details>
The postmortem is a process that should take place following an incident. Its purpose is to identify the root cause of an incident and the actions that should be taken to avoid this kind of incidents from happening again. </b></details>
<details>
<summary>What is the core value often put forward when talking about postmortem?</summary><br><b>
Blamelessness.
Postmortems need to be blameless and this value should be remided at the begining of every postmortem. This is the best way to ensure that people are playing the game to find the root cause and not trying to hide their possible faults.</b></details>
Postmortems need to be blameless and this value should be remided at the beginning of every postmortem. This is the best way to ensure that people are playing the game to find the root cause and not trying to hide their possible faults.</b></details>

@@ -88,7 +88,7 @@ False. You can see [here](https://cloud.google.com/about/locations) which produc
Organization
Folder
Project
Resoruces
Resources
* Organizations - Company
* Folder - usually for departments, teams, products, etc.
@@ -195,7 +195,7 @@ While labels don't affect the resources on which they are applied, network tags
<details>
<summary>Tell me what do you know about GCP networking</summary><br><b>
Virtual Private Cloud(VPC) network is a virtual version of physical network, implemented in Google's internal Network. VPC is a gloabal resource in GCP.
Virtual Private Cloud(VPC) network is a virtual version of physical network, implemented in Google's internal Network. VPC is a global resource in GCP.
Subnetworks(subnets) are regional resources, ie., subnets can be created withinin regions.
VPC are created in 2 modes,
@@ -290,7 +290,7 @@ It is a set of tools to help developers write, run and debug GCP kubernetes base
It is a managed application platform for organisations like enterprises that require quick modernisation and certain levels
of consistency for their legacy applications in a hybrid or multicloud world. From this explanation the core ideas can be drawn from these statements;
* Managed -> the customer does not need to worry about the underlying software intergrations, they just enable the API.
* Managed -> the customer does not need to worry about the underlying software integrations, they just enable the API.
* application platform -> It consists of open source tools like K8s, Knative, Istio and Tekton
* Enterprises -> these are usually organisations with complex needs
* Consistency -> to have the same policies declaratively initiated to be run anywhere securely e.g on-prem, GCP or other-clouds (AWS or Azure)
@@ -344,7 +344,7 @@ instances in the project.
* Node security - By default workloads are provisioned on Compute engine instances that use Google's Container Optimised OS. This operating system implements a locked-down firewall, limited user accounts with root disabled and a read-only filesystem. There is a further option to enable GKE Sandbox for stronger isolation in multi-tenant deployment scenarios.
* Network security - Within a created cluster VPC, Anthos GKE leverages a powerful software-defined network that enables simple Pod-to-Pod communications. Network policies allow locking down ingress and egress connections in a given namespace. Filtering can also be implemented to incoming load-balanced traffic for services that require external access, by supplying whitelisted CIDR IP ranges.
* Workload security - Running workloads run with limited privileges, default Docker AppArmor security policies are applied to all Kubernetes Pods. Workload identity for Anthos GKE aligns with the open source kubernetes service accounts with GCP service account permissions.
* Audit logging - Adminstrators are given a way to retain, query, process and alert on events of the deployed environments.
* Audit logging - Administrators are given a way to retain, query, process and alert on events of the deployed environments.
</b></details>
<details>
@@ -399,7 +399,7 @@ It follows common modern software development practices which makes cluster conf
<details>
<summary>How does Anthos Service Mesh help?</summary><br><b>
Tool and technology integration that makes up Anthos service mesh delivers signficant operational benefits to Anthos environments, with minimal additional overhead such as follows:
Tool and technology integration that makes up Anthos service mesh delivers significant operational benefits to Anthos environments, with minimal additional overhead such as follows:
* Uniform observability - the data plane reports service to service communication back to the control plane generating a service dependency graph. Traffic inspection by the proxy inserts headers to facilitate distributed tracing, capturing and reporting service logs together with service-level metrics (i.e latency, errors, availability).
* Operational agility - fine-grained controls for managing the flow of inter-mesh (north-south) and intra-mesh (east-west) traffic are provided.

@@ -119,7 +119,7 @@ with `--dry-run` flag which will not actually create it, but it will test it and
</b></details>
<details>
<summary>How to check how many containers run in signle Pod?</summary><br><b>
<summary>How to check how many containers run in single Pod?</summary><br><b>
`k get po POD_NAME` and see the number under "READY" column.
@@ -178,7 +178,7 @@ Go to that directory and remove the manifest/definition of the staic Pod (`rm <S
The container failed to run (due to different reasons) and Kubernetes tries to run the Pod again after some delay (= BackOff time).
Some reasons for it to fail:
- Misconfiguration - mispelling, non supported value, etc.
- Misconfiguration - misspelling, non supported value, etc.
- Resource not available - nodes are down, PV not mounted, etc.
Some ways to debug:
@@ -849,7 +849,7 @@ Running `kubectl get events` you can see which scheduler was used.
</b></details>
<details>
<summary>You want to run a new Pod and you would like it to be scheduled by a custom schduler. How to achieve it?</summary><br><b>
<summary>You want to run a new Pod and you would like it to be scheduled by a custom scheduler. How to achieve it?</summary><br><b>
Add the following to the spec of the Pod:

@@ -1173,7 +1173,7 @@ Explanation as to who added them:
<details>
<summary>After creating a service that forwards incoming external traffic to the containerized application, how to make sure it works?</summary><br><b>
You can run `curl <SERIVCE IP>:<SERVICE PORT>` to examine the output.
You can run `curl <SERVICE IP>:<SERVICE PORT>` to examine the output.
</b></details>
<details>
@@ -1316,7 +1316,7 @@ To run two instances of the applicaation?
`kubectl scale deployment <DEPLOYMENT_NAME> --replicas=2`
You can speciy any other number, given that your application knows how to scale.
You can specify any other number, given that your application knows how to scale.
</b></details>
### ReplicaSets
@@ -1791,9 +1791,9 @@ False. When a namespace is deleted, the resources in that namespace are deleted
</b></details>
<details>
<summary>While namspaces do provide scope for resources, they are not isolating them</summary><br><b>
<summary>While namespaces do provide scope for resources, they are not isolating them</summary><br><b>
True. Try create two pods in two separate namspaces for example, and you'll see there is a connection between the two.
True. Try create two pods in two separate namespaces for example, and you'll see there is a connection between the two.
</b></details>
#### Namespaces - commands
@@ -1858,7 +1858,7 @@ If the namespace doesn't exist already: `k create ns dev`
<details>
<summary>What kube-node-lease contains?</summary><br><b>
It holds information on hearbeats of nodes. Each node gets an object which holds information about its availability.
It holds information on heartbeats of nodes. Each node gets an object which holds information about its availability.
</b></details>
<details>
@@ -2854,7 +2854,7 @@ Running `kubectl get events` you can see which scheduler was used.
</b></details>
<details>
<summary>You want to run a new Pod and you would like it to be scheduled by a custom schduler. How to achieve it?</summary><br><b>
<summary>You want to run a new Pod and you would like it to be scheduled by a custom scheduler. How to achieve it?</summary><br><b>
Add the following to the spec of the Pod:
@@ -2912,7 +2912,7 @@ Exit and save. The pod should be in Running state now.
`NoSchedule`: prevents from resources to be scheduled on a certain node
`PreferNoSchedule`: will prefer to shcedule resources on other nodes before resorting to scheduling the resource on the chosen node (on which the taint was applied)
`NoExecute`: Appling "NoSchedule" will not evict already running Pods (or other resources) from the node as opposed to "NoExecute" which will evict any already running resource from the Node
`NoExecute`: Applying "NoSchedule" will not evict already running Pods (or other resources) from the node as opposed to "NoExecute" which will evict any already running resource from the Node
</b></details>
### Resource Limits
@@ -3122,7 +3122,7 @@ Namespaces. See the following [namespaces question and answer](#namespaces-use-c
The container failed to run (due to different reasons) and Kubernetes tries to run the Pod again after some delay (= BackOff time).
Some reasons for it to fail:
- Misconfiguration - mispelling, non supported value, etc.
- Misconfiguration - misspelling, non supported value, etc.
- Resource not available - nodes are down, PV not mounted, etc.
Some ways to debug:

@@ -9,4 +9,4 @@
## After you complete the exercise
* Why did the "RESTARTS" count raised? - `because we killed the process and Kubernetes identified the container isn't running proprely so it performed restart to the Pod`
* Why did the "RESTARTS" count raised? - `because we killed the process and Kubernetes identified the container isn't running properly so it performed restart to the Pod`

@@ -51,7 +51,7 @@ kubectl label pod <POD_NAME> type-
5. List the Pods running. Are there more Pods running after removing the label? Why?
```
Yes, there is an additional Pod running because once the label (used as a matching selector) was removed, the Pod became independant meaning, it's not controlled by the ReplicaSet anymore and the ReplicaSet was missing replicas based on its definition so, it created a new Pod.
Yes, there is an additional Pod running because once the label (used as a matching selector) was removed, the Pod became independent meaning, it's not controlled by the ReplicaSet anymore and the ReplicaSet was missing replicas based on its definition so, it created a new Pod.
```
6. Verify the ReplicaSet indeed created a new Pod

@@ -1195,7 +1195,7 @@ You can also try closing/terminating the parent process. This will make the zomb
* Zombie Processes
</summary><br><b>
If you mention at any point ps command with arugments, be familiar with what these arguments does exactly.
If you mention at any point ps command with arguments, be familiar with what these arguments does exactly.
</b></details>
<details>
@@ -1649,7 +1649,7 @@ There are 2 configuration files, which stores users information
<details>
<summary>Which file stores users passwords? Is it visible for everyone?</summary><br>
`/etc/shadow` file holds the passwords of the users in encryted format. NO, it is only visble to the `root` user
`/etc/shadow` file holds the passwords of the users in encryted format. NO, it is only visible to the `root` user
</details>
<details>
@@ -1980,7 +1980,7 @@ Given the name of an executable and some arguments, it loads the code and static
<summary>True or False? A successful call to exec() never returns</summary><br><b>
True<br>
Since a succesful exec replace the current process, it can't return anything to the process that made the call.
Since a successful exec replace the current process, it can't return anything to the process that made the call.
</b></details>
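A quick way to observe this in Python (assuming a Unix-like system with `/bin/echo`):

```python
import os

print("before exec")
# On success the process image is replaced and nothing below ever runs;
# on failure execv() raises OSError instead of returning.
os.execv("/bin/echo", ["echo", "printed by the new process image"])
print("never reached")
```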
<details>

@@ -141,7 +141,7 @@ Federation
<details>
<summary>What is OpenShift Federation?</summary><br><b>
Management and deployment of services and workloads accross multiple independent clusters from a single API
Management and deployment of services and workloads across multiple independent clusters from a single API
</b></details>
<details>
@@ -190,7 +190,7 @@ Master node automatically restarts the pod unless it fails too often.
<details>
<summary>What happens when a pod fails too often?</summary><br><b>
It's marked as bad by the master node and temporarly not restarted anymore.
It's marked as bad by the master node and temporary not restarted anymore.
</b></details>
<details>

@@ -32,7 +32,7 @@ class Student:
"""Changes the department of the student object
Args:
new_deparment (str): Assigns the new deparment value to dept attr
new_deparment (str): Assigns the new department value to dept attr
"""
self.department = new_deparment

@@ -164,7 +164,7 @@ Multi-Factor Authentication (Also known as 2FA). Allows the user to present two
<details>
<summary>What is password salting? What attack does it help to deter?</summary><br><b>
Password salting is the processing of prepending or appending a series of characters to a user's password before hashing this new combined value. This value should be different for every single user but the same salt should be applied to the same user password everytime it is validated.
Password salting is the processing of prepending or appending a series of characters to a user's password before hashing this new combined value. This value should be different for every single user but the same salt should be applied to the same user password every time it is validated.
This ensures that users that have the same password will still have very different hash values stored in the password database. This process specifically helps deter rainbow table attacks since a new rainbow table would need to be computed for every single user in the database.
</b></details>
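A minimal sketch of salting in Python (the iteration count and salt length here are illustrative, not a recommendation):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None):
    if salt is None:
        salt = os.urandom(16)  # new random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

salt1, hash1 = hash_password("hunter2")
salt2, hash2 = hash_password("hunter2")
assert hash1 != hash2          # same password, different stored hashes

_, again = hash_password("hunter2", salt=salt1)
assert again == hash1          # verification reuses the user's stored salt
```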
@@ -340,7 +340,7 @@ The 'S' in HTTPS stands for 'secure'. HTTPS uses TLS to provide encryption of HT
[Red Hat](https://www.redhat.com/en/topics/security/what-is-cve#how-does-it-work) : "When someone refers to a CVE (Common Vulnerabilities and Exposures), they mean a security flaw that's been assigned a CVE ID number. They dont include technical data, or information about risks, impacts, and fixes." So CVE is just identified by an ID written with 8 digits. The CVE ID have the following format: CVE prefix + Year + Arbitrary Digits.
Anyone can submit a vulnerability, [Exploit Database](https://www.exploit-db.com/submit) explains how it works to submit.
Then CVSS stands for Common Vulnerability Scoring System, it attemps to assign severity scores to vulnerabilities, allowing to ordonnance and prioritize responses and ressources according to threat.
Then CVSS stands for Common Vulnerability Scoring System, it attempts to assign severity scores to vulnerabilities, allowing to ordonnance and prioritize responses and resources according to threat.
</b></details>
@@ -395,7 +395,7 @@ Spectre is an attack method which allows a hacker to “read over the shoulder
<details>
<summary>What is CSRF? How to handle CSRF?</summary><br><b>
Cross-Site Request Forgery (CSRF) is an attack that makes the end user to initate a unwanted action on the web application in which the user has a authenticated session, the attacker may user an email and force the end user to click on the link and that then execute malicious actions. When an CSRF attack is successful it will compromise the end user data 
Cross-Site Request Forgery (CSRF) is an attack that makes the end user to initiate a unwanted action on the web application in which the user has a authenticated session, the attacker may user an email and force the end user to click on the link and that then execute malicious actions. When an CSRF attack is successful it will compromise the end user data 
You can use OWASP ZAP to analyze a "request", and if it appears that there no protection against cross-site request forgery when the Security Level is set to 0 (the value of csrf-token is SecurityIsDisabled.) One can use data from this request to prepare a CSRF attack by using OWASP ZAP
</b></details>
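One common mitigation is the synchronizer token pattern; a rough Python sketch (the session dict stands in for real server-side session storage):

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    token = secrets.token_hex(16)
    session["csrf_token"] = token  # also embedded in the HTML form
    return token

def is_valid_request(session: dict, submitted_token: str) -> bool:
    # Reject state-changing requests whose token doesn't match the session's
    return hmac.compare_digest(session.get("csrf_token", ""), submitted_token)
```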

@@ -101,7 +101,7 @@ SOLID is:
* Single Responsibility - A class* should have one ~responsibility~ reason to change. It was edited by Robert Martin due to wrong understanding of principle
* Open-Closed - A class should be open for extension, but closed for modification. What this practically means is that you should extend functionality by adding a new code and not by modifying it. Your system should be separated into components so it can be easily extended without breaking everything
* Liskov Substitution - Any derived class should be able to substitute the its parent without altering its corrections. Practically, every part of the code will get the expected result no matter which part is using it
* Interface Segregation - A client should never depend on anything it doesn't uses. Big interfaces must be splitted to smaller interfaces if needed
* Interface Segregation - A client should never depend on anything it doesn't uses. Big interfaces must be split to smaller interfaces if needed
* Dependency Inversion - High level modules should depend on abstractions, not low level modules
*there also can be module, component, entity, etc. Depends on project structure and programming language
@@ -143,7 +143,7 @@ Inversion of Control - design principle, used to achieve loose coupling. You mus
<details>
<summary>Explain Dependency Injection (DI)</summary><br><b>
Dependency Injection - deisgn pattern, used with IoC. Our object fields (dependecies) must be configurated by external objects
Dependency Injection - design pattern, used with IoC. Our object fields (dependencies) must be configurated by external objects
</b></details>
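A small Python sketch of the pattern (names are illustrative): the dependency is configured by external code instead of being constructed inside the class.

```python
class EmailNotifier:
    def send(self, message: str) -> None:
        print(f"email: {message}")

class OrderService:
    def __init__(self, notifier):
        # Injected from outside, not hard-coded as EmailNotifier()
        self.notifier = notifier

    def place_order(self, item: str) -> None:
        self.notifier.send(f"order placed: {item}")

# External object/code wires the dependency:
service = OrderService(EmailNotifier())
service.place_order("book")
```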
<details>

@@ -179,7 +179,7 @@ Run `terraform apply`. That will apply the changes described in your .tf files.
</b></details>
<details>
<summary>How to cleanup Terraform resources? Why the user shold be careful doing so?</summary><br><b>
<summary>How to cleanup Terraform resources? Why the user should be careful doing so?</summary><br><b>
`terraform destroy` will cleanup all the resources tracked by Terraform.
@@ -628,7 +628,7 @@ data "aws_vpc" "default {
<details>
<summary>How to get data out of a data source?</summary><br><b>
The general syntax is `data.<PROVIDER_AND_TYPE>.<NAME>.<ATTRBIUTE>`
The general syntax is `data.<PROVIDER_AND_TYPE>.<NAME>.<ATTRIBUTE>`
So if you defined the following data source
@@ -923,7 +923,7 @@ It starts with acquiring a state lock so others can't modify the state at the sa
</b></details>
<details>
<summary>What would be te process of switching back from remote backend to local?</summary><br><b>
<summary>What would be the process of switching back from remote backend to local?</summary><br><b>
1. You remove the backend code and perform `terraform init` to switch back to `local` backend
2. You remove the resources that are the remote backend itself
@@ -940,7 +940,7 @@ One way to deal with it is using partial configurations in a completely separate
</b></details>
<details>
<summary>Is there a way to obtain information from a remote backend/state usign Terraform?</summary><br><b>
<summary>Is there a way to obtain information from a remote backend/state using Terraform?</summary><br><b>
Yes, using the concept of data sources. There is a data source for a remote state called "terraform_remote_state".
@@ -965,9 +965,9 @@ True
</b></details>
<details>
<summary>Why workspaces might not be the best solution for managing states for different environemnts? like staging and production</summary><br><b>
<summary>Why workspaces might not be the best solution for managing states for different environments? like staging and production</summary><br><b>
One reason is that all the workspaces are stored in one location (as in one backend) and usually you don't want to use the same access control and authentication for both staging and production for obvious reasons. Also working in workspaces is quite prone to human errors as you might accidently think you are in one workspace, while you are working a completely different one.
One reason is that all the workspaces are stored in one location (as in one backend) and usually you don't want to use the same access control and authentication for both staging and production for obvious reasons. Also working in workspaces is quite prone to human errors as you might accidentally think you are in one workspace, while you are working a completely different one.
</b></details>
@@ -1167,7 +1167,7 @@ for_each can applied only on collections like maps or sets so the list should be
```
resouce "some_instance" "instance" {
resource "some_instance" "instance" {
dynamic "tag" {
for_each = var.tags
@@ -1829,7 +1829,7 @@ Suggest to use Terraform modules.
<summary>When working with nested layout of many directories, it can make it cumbresome to run terraform commands in many different folders. How to deal with it?</summary><br><b>
There are multiple ways to deal with it:
1. Write scripts that perform some commands recurisvely with different conditions
1. Write scripts that perform some commands recursively with different conditions
2. Use tools like Terragrunt where you commands like "run-all" that can run in parallel on multiple different paths
</b></details>