diff --git a/README.md b/README.md
index fee1abd..21d7419 100644
--- a/README.md
+++ b/README.md
@@ -20,10 +20,10 @@
- DevOps |
- CI/CD |
- Git |
- Ansible |
+ DevOps |
+ CI/CD |
+ Git |
+ Ansible |
Network |
Linux |
@@ -37,7 +37,7 @@
Prometheus |
- Cloud |
+ Cloud |
AWS |
Azure |
Google Cloud Platform |
@@ -69,7 +69,7 @@
HR |
- Terraform |
+ Terraform |
Mongo |
Puppet |
Distributed |
@@ -81,2303 +81,6 @@
-## DevOps
-
-
-What is DevOps?
-
-You can answer it by describing what DevOps means to you and/or rely on how companies define it. I've put here a couple of examples.
-
-Amazon:
-
-"DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market."
-
-Microsoft:
-
-"DevOps is the union of people, process, and products to enable continuous delivery of value to our end users. The contraction of “Dev” and “Ops” refers to replacing siloed Development and Operations to create multidisciplinary teams that now work together with shared and efficient practices and tools. Essential DevOps practices include agile planning, continuous integration, continuous delivery, and monitoring of applications."
-
-Red Hat:
-
-"DevOps describes approaches to speeding up the processes by which an idea (like a new software feature, a request for enhancement, or a bug fix) goes from development to deployment in a production environment where it can provide value to the user. These approaches require that development teams and operations teams communicate frequently and approach their work with empathy for their teammates. Scalability and flexible provisioning are also necessary. With DevOps, those that need power the most, get it—through self service and automation. Developers, usually coding in a standard development environment, work closely with IT operations to speed software builds, tests, and releases—without sacrificing reliability."
-
-Google:
-
-"...The organizational and cultural movement that aims to increase software delivery velocity, improve service reliability, and build shared ownership among software stakeholders"
-
-
-
-What are the benefits of DevOps? What can it help us to achieve?
-
- * Collaboration
- * Improved delivery
- * Security
- * Speed
- * Scale
- * Reliability
-
-
-
-What are the anti-patterns of DevOps?
-
-A couple of examples:
-
-* One person is in charge of specific tasks. For example, there is only one person who is allowed to merge the code of everyone else into the repository.
-* Treating production differently from the development environment. For example, not implementing security in the development environment
-* Not allowing someone to push to production on Friday ;)
-
-
-
-How would you describe a successful DevOps engineer or a team?
-
-The answer can focus on:
-
-* Collaboration
-* Communication
-* Set up and improve workflows and processes (related to testing, delivery, ...)
-* Dealing with issues
-
-Things to think about:
-
-* What DevOps teams or engineers should NOT focus on or do?
-* Do DevOps teams or engineers have to be innovative or practice innovation as part of their role?
-
-
-#### Tooling
-
-
-What are you taking into consideration when choosing a tool/technology?
-
-A few ideas to think about:
-
- * mature/stable vs. cutting edge
- * community size
- * architecture aspects - agent vs. agentless, master vs. masterless, etc.
- * learning curve
-
-
-
-Can you describe which tool or platform you chose to use in some of the following areas and how?
-
- * CI/CD
- * Provisioning infrastructure
- * Configuration Management
- * Monitoring & alerting
- * Logging
- * Code review
- * Code coverage
- * Issue Tracking
- * Containers and Containers Orchestration
- * Tests
-
-This is a more practical version of the previous question where you might be asked additional specific questions on the technology you chose
-
- * CI/CD - Jenkins, Circle CI, Travis, Drone, Argo CD, Zuul
- * Provisioning infrastructure - Terraform, CloudFormation
- * Configuration Management - Ansible, Puppet, Chef
- * Monitoring & alerting - Prometheus, Nagios
- * Logging - Logstash, Graylog, Fluentd
- * Code review - Gerrit, Review Board
- * Code coverage - Cobertura, Clover, JaCoCo
- * Issue tracking - Jira, Bugzilla
- * Containers and Containers Orchestration - Docker, Podman, Kubernetes, Nomad
- * Tests - Robot, Serenity, Gauge
-
-
-
-A team member of yours suggests replacing the current CI/CD platform used by the organization with a new one. How would you reply?
-
-Things to think about:
-
-* What do we gain from doing so? Are there new features in the new platform? Does the new platform deal with some of the limitations of the current platform?
-* What is this suggestion based on? In other words, did he/she try out the new platform? Was there extensive technical research?
-* What will the switch from one platform to another require from the organization? For example, training users who use the platform. How much time does the team have to invest in such a move?
-
-
-#### Version Control
-
-
-What is Version Control?
-
-* Version control is the system of tracking and managing changes to software code.
-* It helps software teams to manage changes to source code over time.
-* Version control also helps developers move faster and allows software teams to preserve efficiency and agility as the team scales to include more developers.
-
-
-
-What is a commit?
-
-* In Git, a commit is a snapshot of your repo at a specific point in time.
-* The git commit command will save all staged changes, along with a brief description from the user, in a “commit” to the local repository.
-
-
-
-What is a merge?
-
-* Merging is Git's way of putting a forked history back together again. The git merge command lets you take the independent lines of development created by git branch and integrate them into a single branch.
-
-
-
-What is a merge conflict?
-
-* A merge conflict is an event that occurs when Git is unable to automatically resolve differences in code between two commits. When all the changes in the code occur on different lines or in different files, Git will successfully merge commits without your help.
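-
-When Git can't resolve the differences automatically, it marks the conflicting region in the file with conflict markers, similar to this generic illustration:
-
-```
-<<<<<<< HEAD
-the version of the line from your current branch
-=======
-the version of the line from the branch being merged
->>>>>>> feature-branch
-```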
-
-
-
-What best practices are you familiar with regarding version control?
-
-* Use a descriptive commit message
-* Make each commit a logical unit
-* Incorporate others' changes frequently
-* Share your changes frequently
-* Coordinate with your co-workers
-* Don't commit generated files
-
-
-
-Would you prefer a "configuration->deployment" model or "deployment->configuration"? Why?
-
-Both have advantages and disadvantages.
-With "configuration->deployment" model for example, where you build one image to be used by multiple deployments, there is less chance of deployments being different from one another, so it has a clear advantage of a consistent environment.
-
-
-
-Explain mutable vs. immutable infrastructure
-
-In the mutable infrastructure paradigm, changes are applied on top of the existing infrastructure, and over time
-the infrastructure builds up a history of changes. Ansible, Puppet and Chef are examples of tools which
-follow the mutable infrastructure paradigm.
-
-In the immutable infrastructure paradigm, every change is actually a new infrastructure. So a change
-to a server will result in a new server instead of updating the existing one. Terraform is an example of a technology
-which follows the immutable infrastructure paradigm.
-
-
-#### Software Distribution
-
-
-Explain "Software Distribution"
-
-Read [this](https://venam.nixers.net/blog/unix/2020/03/29/distro-pkgs.html) fantastic article on the topic.
-
-From the article: "Thus, software distribution is about the mechanism and the community that takes the burden and decisions to build an assemblage of coherent software that can be shipped."
-
-
-
-Why are there multiple software distributions? What differences can they have?
-
-Different distributions can focus on different things: different environments (server vs. mobile vs. desktop), support for specific hardware, specialization in certain domains (security, multimedia, ...), etc. Basically, different aspects of the software and what it supports get different priority in each distribution.
-
-
-
-What is a Software Repository?
-
-Wikipedia: "A software repository, or “repo” for short, is a storage location for software packages. Often a table of contents is stored, as well as metadata."
-
-Read more [here](https://en.wikipedia.org/wiki/Software_repository)
-
-
-
-What ways are there to distribute software? What are the advantages and disadvantages of each method?
-
- * Source - Maintain a build script within the version control system so that users can build your app after cloning the repository. Advantage: users can quickly check out different versions of the application. Disadvantage: requires build tools to be installed on the user's machine.
- * Archive - collect all your app files into one archive (e.g. tar) and deliver it to the user. Advantage: the user gets everything needed in one file. Disadvantage: requires repeating the same procedure when updating; not good if there are a lot of dependencies.
- * Package - depending on the OS, you can use your OS package format (e.g. in RHEL/Fedora it's RPM) to deliver your software with a way to install, uninstall and update it using the standard package manager commands. Advantage: the package manager takes care of installation, uninstallation, updating and dependency management. Disadvantage: requires managing a package repository.
- * Images - either VM or container images where your package is included with everything it needs in order to run successfully. Advantage: everything is preinstalled and there is a high degree of environment isolation. Disadvantage: requires knowledge of building and optimizing images.
-
-
-
-Are you familiar with "The Cathedral and the Bazaar models"? Explain each of the models
-
-* Cathedral - source code released when software is released
-* Bazaar - source code is always available publicly (e.g. Linux Kernel)
-
-
-
-What is caching? How does it work? Why is it important?
-
-Caching means fast access to frequently used resources which are computationally expensive or IO intensive and do not change often. There can be several layers of cache, ranging from CPU caches to distributed cache systems. Common ones are in-memory caching and distributed caching.
-Caches are typically data structures that contain some data, such as a hashtable or dictionary, although any data structure can provide caching capabilities: a set, sorted set, sorted dictionary, etc. While caching is used in many applications, it can create subtle bugs if not implemented or used correctly. For example, cache invalidation, expiration and updating are usually quite challenging.
-
-
-
-Explain stateless vs. stateful
-
-Stateless applications don't store any data on the host, which makes them ideal for horizontal scaling and microservices.
-Stateful applications depend on storage to save state and data; databases are typical stateful applications.
-
-
-
-What is Reliability? How does it fit DevOps?
-
-Reliability, when used in a DevOps context, is the ability of a system to recover from infrastructure failure or disruption. Part of it is also being able to scale based on your organization's or team's demands.
-
-
-
-What "Availability" means? What means are there to track Availability of a service?
-
-
-
-Why isn't 100% availability a target? Why do most companies or teams set it to 99.X%?
-
-
-
-Describe the workflow of setting up some type of web server (Apache, IIS, Tomcat, ...)
-
-
-
-How does a web server work?
-
-
-
-Explain "Open Source"
-
-
-
-Describe the architecture of a service/app/project/... you designed and/or implemented
-
-
-
-What types of tests are you familiar with?
-
-Styling, unit, functional, API, integration, smoke, scenario, ...
-
-You should be able to explain those that you mention.
-
-
-
-You need to periodically install a package (unless it already exists) on different operating systems (Ubuntu, RHEL, ...). How would you do it?
-
-There are multiple ways to answer this question (there is no right or wrong here):
-
-* Simple cron job
-* Pipeline with configuration management technology (such as Puppet, Ansible, Chef, etc.)
-...
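-
-As a minimal sketch of the configuration management route, here is an Ansible play using the generic `ansible.builtin.package` module (which delegates to apt/yum/dnf per distribution; the package name is just an example). It could be run periodically by a cron job or a scheduled pipeline:
-
-```
-- name: Ensure a package is present on all hosts
-  hosts: all
-  become: true
-  tasks:
-    - name: Install the package unless it already exists
-      ansible.builtin.package:
-        name: htop        # example package, swap for the one you need
-        state: present    # no-op if the package is already installed
-```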
-
-
-
-What is Chaos Engineering?
-
-Wikipedia: "Chaos engineering is the discipline of experimenting on a software system in production in order to build confidence in the system's capability to withstand turbulent and unexpected conditions"
-
-Read about Chaos Engineering [here](https://en.wikipedia.org/wiki/Chaos_engineering)
-
-
-
-What is "infrastructure as code"? What implementation of IAC are you familiar with?
-
-IaC (infrastructure as code) is a declarative approach to defining the infrastructure or architecture of a system. Some implementations are ARM templates for Azure, and Terraform, which can work across multiple cloud providers.
-
-
-
-What benefits does infrastructure-as-code have?
-
-- fully automated process of provisioning, modifying and deleting your infrastructure
-- version control for your infrastructure which allows you to quickly rollback to previous versions
-- validate infrastructure quality and stability with automated tests and code reviews
-- makes infrastructure tasks less repetitive
-
-
-
-How do you manage build artifacts?
-
-Build artifacts are usually stored in a repository. They can be used in release pipelines for deployment purposes. Usually there is a retention period on the build artifacts.
-
-
-
-What Continuous Integration solution are you using/prefer and why?
-
-
-
-What deployment strategies are you familiar with or have used?
-
- There are several deployment strategies:
- * Rolling
- * Blue green deployment
- * Canary releases
- * Recreate strategy
-
-
-
-
-You joined a team where everyone is developing one project and the practice is to run tests locally on their workstation and push to the repository if the tests passed. What is the problem with the process as it is now and how would you improve it?
-
-
-
-Explain test-driven development (TDD)
-
-
-
-Explain agile software development
-
-
-
-What do you think about the following sentence?: "implementing or practicing DevOps leads to more secure software"
-
-
-
-Do you know what is a "post-mortem meeting"? What is your opinion on that?
-
-
-
-What is a configuration drift? What problems is it causing?
-
-Configuration drift happens when, in an environment of servers with the exact same configuration and software, updates
-or configuration changes are applied to some servers but not to others, and over time these servers become
-slightly different from all the others.
-
-This situation might lead to bugs which are hard to identify and reproduce.
-
-
-
-How to deal with a configuration drift?
- Configuration drift can be avoided with a desired state configuration (DSC) implementation. A desired state configuration is typically a declarative file that defines how a system should be. There are tools that enforce the desired state, such as Terraform or Azure DSC, using either incremental or complete strategies.
-
-
-
-Explain Declarative and Procedural styles. The technologies you are familiar with (or using) are using procedural or declarative style?
-
-Declarative - You write code that specifies the desired end state
-Procedural - You describe the steps to get to the desired end state
-
-Declarative Tools - Terraform, Puppet, CloudFormation
-Procedural Tools - Ansible, Chef
-
-To better emphasize the difference, consider creating two virtual instances/servers.
-In declarative style, you would specify two servers and the tool will figure out how to reach that state.
-In procedural style, you need to specify the steps to reach the end state of two instances/servers - for example, create a loop and in each iteration of the loop create one instance (running the loop twice of course).
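-
-As a rough sketch of the procedural style, here is an Ansible-like playbook that loops twice and creates one instance per iteration (the module parameters are illustrative, assuming the `amazon.aws.ec2_instance` module):
-
-```
-- name: Create two instances, one per loop iteration
-  hosts: localhost
-  tasks:
-    - name: Launch instance number {{ item }}
-      amazon.aws.ec2_instance:
-        name: "server-{{ item }}"            # illustrative parameters
-        instance_type: t2.micro
-        image_id: ami-0123456789abcdef0      # placeholder AMI ID
-      loop: [1, 2]
-```
-
-In declarative Terraform, by contrast, you would state the desired count (e.g. `count = 2` on a resource) and let the tool figure out how to converge to it.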
-
-
-
-Do you have experience with testing cross-projects changes? (aka cross-dependency)
-
-Note: cross-dependency is when you have two or more changes to separate projects and you would like to test them in a mutual build instead of testing each change separately.
-
-
-
-Have you contributed to an open source project? Tell me about this experience
-
-
-
-What is Distributed Tracing?
-
-
-
-What is GitOps?
-
-GitLab: "GitOps is an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and CI/CD tooling, and applies them to infrastructure automation".
-
-Read more [here](https://about.gitlab.com/topics/gitops)
-
-
-#### SRE
-
-
-What are the differences between SRE and DevOps?
-
-Google: "One could view DevOps as a generalization of several core SRE principles to a wider range of organizations, management structures, and personnel."
-
-Read more about it [here](https://sre.google/sre-book/introduction)
-
-
-
-What is an SRE team responsible for?
-
-Google: "the SRE team is responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of their services"
-
-Read more about it [here](https://sre.google/sre-book/introduction)
-
-
-
-What is an error budget?
-
-Atlassian: "An error budget is the maximum amount of time that a technical system can fail without contractual consequences."
-
-Read more about it [here](https://www.atlassian.com/incident-management/kpis/error-budget)
-
-
-
-What do you think about the following statement: "100% is the only right availability target for a system"
-
-Wrong. No system can guarantee 100% availability as no system is safe from downtime.
-Many systems and services will fall somewhere between 99% and 100% uptime (or at least this is how most systems and services should be).
-
-
-
-What are MTTF (mean time to failure) and MTTR (mean time to repair)? What do these metrics help us to evaluate?
-
- * MTTF (mean time to failure), also known as uptime, can be defined as how long the system runs before it fails.
- * MTTR (mean time to repair), on the other hand, is the amount of time it takes to repair a broken system.
- * MTBF (mean time between failures) is the amount of time between failures of the system.
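-
-For a repairable system, these relate as MTBF = MTTF + MTTR. Together, these metrics help evaluate how reliable a system is and how quickly the team can recover it when it fails.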
-
-
-
-What is the role of monitoring in SRE?
-
-Google: "Monitoring is one of the primary means by which service owners keep track of a system’s health and availability"
-
-Read more about it [here](https://sre.google/sre-book/introduction)
-
-
-## CI/CD
-
-### CI/CD Exercises
-
-|Name|Topic|Objective & Instructions|Solution|Comments|
-|--------|--------|------|----|----|
-| Set up a CI pipeline | CI | [Exercise](exercises/cicd/ci_for_open_source_project.md) | | |
-| Deploy to Kubernetes | Deployment | [Exercise](exercises/devops/deploy_to_kubernetes.md) | [Solution](exercises/devops/solutions/deploy_to_kubernetes/README.md) | |
-| Jenkins - Remove Jobs | Jenkins Scripts | [Exercise](exercises/cicd/remove_jobs.md) | [Solution](exercises/cicd/solutions/remove_jobs_solution.groovy) | |
-| Jenkins - Remove Builds | Jenkins Scripts | [Exercise](exercises/cicd/remove_builds.md) | [Solution](exercises/cicd/solutions/remove_builds_solution.groovy) | |
-
-### CI/CD Self Assessment
-
-
-What is Continuous Integration?
-
-A development practice where developers integrate code into a shared repository frequently. The pace can range from a couple of changes every day or week up to several changes in one hour at larger scales.
-
-Each piece of code (change/patch) is verified, to make sure the change is safe to merge. Today, it's a common practice to test the change using an automated build that makes sure the code can be integrated. It can be one build which runs several tests in different levels (unit, functional, etc.) or several separate builds that all or some have to pass in order for the change to be merged into the repository.
-
-
-
-What is Continuous Deployment?
-
-A development strategy used by developers to release software automatically into production where any code commit must pass through an automated testing phase. Only when this is successful is the release considered production worthy. This eliminates any human interaction and should be implemented only after production-ready pipelines have been set up with real-time monitoring and reporting of deployed assets. If any issues are detected in production it should be easy to roll back to the previous working state.
-
-For more info please read [here](https://www.atlassian.com/continuous-delivery/continuous-deployment)
-
-
-
-Can you describe an example of a CI (and/or CD) process starting the moment a developer submitted a change/PR to a repository?
-
-There are many answers to such a question, as CI processes vary depending on the technologies used and the type of project the change was submitted to.
-Such processes can include one or more of the following stages:
-
-* Compile
-* Build
-* Install
-* Configure
-* Update
-* Test
-
-An example of one possible answer:
-
-A developer submitted a pull request to a project. The PR (pull request) triggered two jobs (or one combined job): one job for running lint tests on the change and a second job for building a package which includes the submitted change, and running multiple api/scenario tests using that package. Once all tests passed and the change was approved by a maintainer/core, it's merged/pushed to the repository. If some of the tests fail, the change will not be allowed to be merged/pushed to the repository.
-
-A completely different answer or CI process can describe how a developer pushes code to a repository, a workflow is then triggered to build a container image and push it to the registry. Once the image is in the registry, the new changes are applied to the k8s cluster.
-
-
-
-What is Continuous Delivery?
-
-A development strategy used to frequently deliver code to QA and Ops for testing. This entails having a staging area that has production-like features where changes can only be accepted for production after a manual review. Because of this human involvement there is usually a time lag between release and review, making it slower and more error prone as compared to continuous deployment.
-
-For more info please read [here](https://www.atlassian.com/continuous-delivery/continuous-deployment)
-
-
-
-What is the difference between Continuous Delivery and Continuous Deployment?
-
-Both encapsulate the same process of deploying the changes which were compiled and/or tested in the CI pipelines.
-The difference between the two is that Continuous Delivery isn't a fully automated process, as opposed to Continuous Deployment where every change that is tested in the process is eventually deployed to production. In continuous delivery someone is either approving the deployment process or the deployment process is based on constraints and conditions (like a time constraint of deploying every week/month/...)
-
-
-
-What CI/CD best practices are you familiar with? Or what do you consider as CI/CD best practice?
-
-* Commit and test often.
-* Testing/Staging environment should be a clone of production environment.
-* Clean up your environments (e.g. your CI/CD pipelines may create a lot of resources. They should also take care of cleaning up everything they create)
-* The CI/CD pipelines should provide the same results when executed locally or remotely
-* Treat CI/CD as another application in your organization. Not as a glue code.
-* On demand environments instead of pre-allocated resources for CI/CD purposes
-* Stages/Steps/Tasks of pipelines should be shared between applications or microservices (don't re-invent common tasks like "cloning a project")
-
-
-
-You are given a pipeline and a pool with 3 workers: a virtual machine, a bare metal server and a container. How will you decide which one of them to run the pipeline on?
-
-
-
-Where do you store CI/CD pipelines? Why?
-
-There are multiple approaches as to where to store the CI/CD pipeline definitions:
-
-1. App Repository - store them in the same repository of the application they are building or testing (perhaps the most popular one)
-2. Central Repository - store all organization's/project's CI/CD pipelines in one separate repository (perhaps the best approach when multiple teams test the same set of projects and they end up having many pipelines)
-3. CI repo for every app repo - you separate CI related code from app code but you don't put everything in one place (perhaps the worst option due to the maintenance)
-4. The platform where the CI/CD pipelines are being executed (e.g. Kubernetes Cluster in case of Tekton/OpenShift Pipelines).
-
-
-
-How do you perform capacity planning for your CI/CD resources? (e.g. servers, storage, etc.)
-
-
-
-How would you structure/implement CD for an application which depends on several other applications?
-
-
-
-How do you measure your CI/CD quality? Are there any metrics or KPIs you are using for measuring the quality?
-
-
-#### CI/CD - Jenkins
-
-
-What is Jenkins? What have you used it for?
-
-Jenkins is an open source automation tool written in Java with plugins built for Continuous Integration purposes. Jenkins is used to build and test your software projects continuously, making it easier for developers to integrate changes to the project, and making it easier for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.
-
-Jenkins integrates development life-cycle processes of all kinds, including build, document, test, package, stage, deploy, static analysis and much more.
-
-
-
-
-What are the advantages of Jenkins over its competitors? Can you compare it to one of the following systems?
-
- * Travis
- * Bamboo
- * Teamcity
- * CircleCI
-
-
-
-What are the limitations or disadvantages of Jenkins?
-
-This might be considered to be an opinionated answer:
-
-* Old-fashioned dashboards with not many options to customize
-* Containers readiness (this has improved with Jenkins X)
-* By itself, it doesn't have many features. On the other hand, there are many plugins created by the community to expand its abilities
-* Managing Jenkins and its pipelines as code can be one hell of a nightmare
-
-
-
-Explain the following:
-
-- Job
-- Build
-- Plugin
-- Node or Worker
-- Executor
-- Job is an automation definition = what and where to execute once the user clicks on "build"
-- Build is a running instance of a job. You can have one or more builds at any given point of time (unless limited by configuration)
-- Plugin is a community-developed extension that adds functionality to Jenkins
-- A worker (node) is the machine/instance on which the build is running. When a build starts, it "acquires" a worker out of a pool to run on it.
-- An executor is a property of the worker, defining how many builds can run on that worker in parallel. An executor value of 3 means that 3 builds can run at any point on that worker (not necessarily of the same job - any builds)
-
-
-
-What plugins have you used in Jenkins?
-
-
-
-Have you used Jenkins for CI or CD processes? Can you describe them?
-
-
-
-What type of jobs are there? Which types have you used?
-
-
-
-How did you report build results to users? What ways are there to report the results?
-
-You can report via:
- * Emails
- * Messaging apps
- * Dashboards
-
-Each has its own advantages and disadvantages. Emails, for example, if sent too often, can eventually be disregarded or ignored.
-
-
-
-You need to run unit tests every time a change is submitted to a given project. Describe in detail what your pipeline would look like and what will be executed in each stage
-
-The pipeline will have multiple stages:
-
- * Clone the project
- * Install test dependencies (for example, if I need tox package to run the tests, I will install it in this stage)
- * Run unit tests
- * (Optional) report results (For example an email to the users)
- * Archive the relevant logs/files
-
-
-
-How to secure Jenkins?
-
- [Jenkins documentation](https://www.jenkins.io/doc/book/security/securing-jenkins/) provides some basic intro for securing your Jenkins server.
-
-
-
-Describe how you add new nodes (agents) to Jenkins
-
-You can describe the UI way to add new nodes, but it's better to explain how to do it in a way that scales, like a script, or using a dynamic source of nodes like one of the existing clouds.
-
-
-
-How to acquire multiple nodes for one specific build?
-
-
-
-Whenever a build fails, you would like to notify the team owning the job regarding the failure and provide the failure reason. How would you do that?
-
-
-
-There are four teams in your organization. How would you prioritize the builds of each team, so that the jobs of team X always run before team Y, for example?
-
-
-
-If you are managing a dozen jobs, you can probably use the Jenkins UI. But how do you manage the creation and deletion of hundreds of jobs every week/month?
-
-
-
-What are some of Jenkins limitations?
-
- * Testing cross-dependencies (changes from multiple projects together)
- * Starting builds from any stage (although Cloudbees implemented something called checkpoints)
-
-
-
-What is the difference between a scripted pipeline and a declarative pipeline? Which type are you using?
-
-
-
-How would you implement an option of starting a build from a certain stage and not from the beginning?
-
-
-
-Do you have experience with developing a Jenkins plugin? Can you describe this experience?
-
-
-
-Have you written Jenkins scripts? If yes, what for and how do they work?
-
-
-#### CI/CD - GitHub Actions
-
-
-What is a Workflow in GitHub Actions?
-
-A YAML file that defines the automation actions and instructions to execute upon a specific event.
-The file is placed in the repository itself.
-
-A Workflow can be anything - running tests, compiling code, building packages, ...
-
-
-
-What is a Runner in GitHub Actions?
-
-A workflow has to be executed somewhere. The environment where the workflow is executed is called a Runner.
-A Runner can be an on-premise host or a GitHub-hosted one.
-
-
-
-What is a Job in GitHub Actions?
-
-A job is a series of steps which are executed on the same runner/environment.
-A workflow must include at least one job.
-
-
-
-What is an Action in GitHub Actions?
-
-An action is the smallest unit in a workflow. It includes the commands to execute as part of the job.
-
-
-
-In a GitHub Actions workflow, what is the 'on' attribute/directive used for?
-
-Specifies upon which events the workflow will be triggered.
-For example, you might configure the workflow to trigger every time a change is pushed to the repository.
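-
-A minimal sketch, for example, that triggers the workflow on pushes to the main branch and on pull requests targeting it:
-
-```
-on:
-  push:
-    branches: [main]
-  pull_request:
-    branches: [main]
-```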
-
-
-
-True or False? In GitHub Actions, jobs are executed in parallel by default
-
-True
-
-
-
-How to create dependencies between jobs so one job runs after another?
-
-Using the "needs" attribute/directive.
-
-```
-jobs:
- job1:
- job2:
- needs: job1
-```
-
-In the above example, job1 must complete successfully before job2 runs
-
-
-
-How to add a Workflow to a repository?
-CLI:
-
-1. Create the directory `.github/workflows` in the repository
-2. Add a YAML file
-
-UI:
-
-1. In the repository page, click on "Actions"
-2. Choose workflow and click on "Set up this workflow"
-
-
-## Cloud
-
-
-What is Cloud Computing? What is a Cloud Provider?
-
-Cloud computing refers to the delivery of on-demand computing services
-over the internet on a pay-as-you-go basis.
-
-In simple words, cloud computing is a service that lets you use any computing
-service such as a server, storage, networking, databases, and intelligence,
-right through your browser without owning anything. You can do almost anything
-you can think of, as long as it doesn't require you to stay close to your hardware.
-
-Cloud service providers are companies that establish public clouds, manage private clouds, or offer on-demand cloud computing components (also known as cloud computing services) like Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Cloud services can reduce business process costs when compared to on-premise IT.
-
-
-
-What are the advantages of cloud computing? Mention at least 3 advantages
-
-* Pay as you go: you are paying only for what you are using. No upfront payments and payment stops when resources are no longer used.
-* Scalable: resources are scaled down or up based on demand
-* High availability: resources and applications provide seamless experience, even when some services are down
-* Disaster recovery
-
-
-
-True or False? Cloud computing is a consumption-based model (users only pay for resources they use)
-
-True
-
-
-
-What types of Cloud Computing services are there?
-
-IAAS - Infrastructure as a Service
-PAAS - Platform as a Service
-SAAS - Software as a Service
-
-
-
-Explain each of the following and give an example:
-
- * IAAS
- * PAAS
- * SAAS
- * IAAS - Users have control over a complete operating system and don't need to worry about the physical resources, which are managed by the cloud service provider.
- * PAAS - The cloud service provider takes care of the operating system and middleware, and users only need to focus on their data and application.
- * SAAS - A cloud-based method of providing software to users: the software logic runs in the cloud and is managed by the cloud service provider, rather than on-premises.
-
-
-
-What types of clouds (or cloud deployments) are there?
-
- * Public - cloud services sharing computing resources among multiple customers
- * Private - cloud services with computing resources limited to a specific customer or organization, managed by a third party or by the organization itself
- * Hybrid - Combination of public and private clouds
-
-
-
-What are the differences between Cloud Providers and On-Premise solution?
-
-With cloud providers, someone else owns and manages the hardware, hires the relevant infrastructure teams and pays for the real estate (for both hardware and people). You can focus on your business.
-
-With an on-premise solution, it's quite the opposite. You need to take care of the hardware and infrastructure teams and pay for everything, which can be quite expensive. On the other hand, it's tailored to your needs.
-
-
-
-What is Serverless Computing?
-
-The main idea behind serverless computing is that you don't need to manage the creation and configuration of servers. All you need to focus on is splitting your app into multiple functions which will be triggered by some actions.
-
-It's important to note that:
-
-* Serverless Computing is still using servers. So saying there are no servers in serverless computing is completely wrong
-* Serverless Computing allows you to have a different payment model. You basically pay only when your functions are running, not the whole time a VM or container is running as in other payment models
-
-
-
-Can we replace any type of computing on servers with serverless?
-
-
-
-Is there a difference between managed service to SaaS or is it the same thing?
-
-
-
-What is auto scaling?
-
-AWS definition: "AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost"
-
-Read more about auto scaling [here](https://aws.amazon.com/autoscaling)
-
-
-
-True or False? Auto Scaling is about adding resources (such as instances) and not about removing resources
-
-False. Auto scaling adjusts capacity, which can also mean removing resources, based on usage and performance.
-
-
-#### Cloud - Security
-
-
-How to secure instances in the cloud?
-
- * Instance should have minimal permissions needed. You don't want an instance-level incident to become an account-level incident
- * Instances should be accessed through load balancers or bastion hosts. In other words, they should be off the internet (in a private subnet behind a NAT).
- * Using latest OS images with your instances (or at least apply latest patches)
-
-
-## AWS
-
-### AWS Exercises
-
-#### AWS - IAM
-
-|Name|Topic|Objective & Instructions|Solution|Comments|
-|--------|--------|------|----|----|
-| Create a User | IAM | [Exercise](exercises/aws/create_user.md) | [Solution](exercises/aws/solutions/create_user.md) | |
-| Password Policy | IAM | [Exercise](exercises/aws/password_policy_and_mfa.md) | [Solution](exercises/aws/solutions/password_policy_and_mfa.md) | |
-| Create a role | IAM | [Exercise](exercises/aws/create_role.md) | [Solution](exercises/aws/solutions/create_role.md) | |
-| Credential Report | IAM | [Exercise](exercises/aws/credential_report.md) | [Solution](exercises/aws/solutions/credential_report.md) | |
-| Access Advisor | IAM | [Exercise](exercises/aws/access_advisor.md) | [Solution](exercises/aws/solutions/access_advisor.md) | |
-
-#### AWS - Lambda
-
-|Name|Topic|Objective & Instructions|Solution|Comments|
-|--------|--------|------|----|----|
-| Hello Function | Lambda | [Exercise](exercises/aws/hello_function.md) | [Solution](exercises/aws/solutions/hello_function.md) | |
-| URL Function | Lambda | [Exercise](exercises/aws/url_function.md) | [Solution](exercises/aws/solutions/url_function.md) | |
-
-### AWS Self Assessment
-
-#### AWS - Global Infrastructure
-
-
-Explain the following
-
- * Availability zone
- * Region
- * Edge location
-
-AWS regions are geographic locations worldwide where AWS hosts its data centers.
-
-Within each region, there are multiple isolated locations known as Availability Zones. Each availability zone is one or more data centers with redundant networking, connectivity and power supply. Multiple availability zones ensure high availability in case one of them goes down.
-
-Edge locations are endpoints of a content delivery network which cache data to ensure lower latency and faster delivery to users in any location. They are located in major cities around the world.
-
-
-
-True or False? Each AWS region is designed to be completely isolated from the other AWS regions
-
-True.
-
-
-
-True or False? Each region has a minimum number of 1 availability zones and the maximum is 4
-
-False. The minimum is 2 while the maximum is 6.
-
-
-
-What considerations to take when choosing an AWS region for running a new application?
-
-* Services availability: not all services (and all their features) are available in every region
-* Reduced latency: deploy the application in a region that is close to customers
-* Compliance: some countries have stricter rules and requirements, such as making sure the data stays within the borders of the country or the region. In that case, only specific regions can be used for running the application
-* Pricing: pricing might not be consistent across regions, so the price for the same service can differ between regions.
-
-
-#### AWS - IAM
-
-
-What is IAM? What are some of its features?
-
-In short, it's used for managing users, groups, access policies & roles
-Full explanation can be found [here](https://aws.amazon.com/iam)
-
-
-
-True or False? IAM configuration is defined globally and not per region
-
-True
-
-
-
-True or False? When creating an AWS account, root account is created by default. This is the recommended account to use and share in your organization
-
-False. Instead of using the root account, you should create users and use them.
-
-
-
-True or False? Groups in AWS IAM, can contain only users and not other groups
-
-True
-
-
-
-True or False? Users in AWS IAM, can belong only to a single group
-
-False. Users can belong to multiple groups.
-
-
-
-What are some best practices regarding IAM in AWS?
-
-* Delete root account access keys and don't use root account regularly
-* Create IAM user for any physical user. Don't share users.
-* Apply "least privilege principle": give users only the permissions they need, nothing more than that.
-* Set up MFA and consider enforcing using it
-* Make use of groups to assign permissions ( user -> group -> permissions )
-
-
-
-What permissions does a new user have?
-
-Only a login access.
-
-
-
-True or False? If a user in AWS is using a password for authentication, they don't need to enable MFA
-
-False(!). MFA is a great additional security layer to use for authentication.
-
-
-
-What ways are there to access AWS?
-
- * AWS Management Console
- * AWS CLI
- * AWS SDK
-
-
-
-What are Roles?
-
-[AWS docs](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html): "An IAM role is an IAM identity that you can create in your account that has specific permissions...it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS."
-For example, you can make use of a role which allows EC2 service to access s3 buckets (read and write).
-
-
-
-What are Policies?
-
-Policies are documents used to define permissions: what a user, group or role is able to do. Their format is JSON.
-
-
-
-A user is unable to access an s3 bucket. What might be the problem?
-
-There can be several reasons for that. One of them is a lack of policy. To solve that, the admin has to attach a policy to the user that allows them to access the s3 bucket.
-
-
-
-What should you use to:
-
- - Grant access between two services/resources?
- - Grant user access to resources/services?
-
- * Role
- * Policy
-
-
-
-What elements do AWS IAM policy statements support?
-
-* Sid: identifier of the statement (optional)
-* Effect: allow or deny access
-* Action: list of actions (to deny or allow)
-* Resource: a list of resources to which the actions are applied
-* Principal: the account, user or role to which the policy is applied
-* Condition: conditions to determine when the policy is applied (optional)
-
-
-
-Explain the following policy:
-
-```
-{
- "Version": "2012-10-17",
- "Statement": [
- {
- "Effect:": "Allow",
- "Action": "*",
- "Resources": "*"
- }
- ]
-}
-```
-
-
-This policy permits performing any action on any resource. It happens to be the "AdministratorAccess" policy.
-
-
-
-What security tools AWS IAM provides?
-
-* IAM Credentials Report: lists all the account users and the status of their credentials
-* IAM Access Advisor: shows the service permissions granted to a user and when those services were last accessed
-
-
-
-Which tool would you use to optimize user permissions by identifying which services a user doesn't access regularly (or at all)?
-
-IAM Access Advisor
-
-
-#### AWS - Compute
-
-
-What is EC2?
-
-"a web service that provides secure, resizable compute capacity in the cloud".
-Read more [here](https://aws.amazon.com/ec2)
-
-
-
-True or False? EC2 is a regional service
-
-True. As opposed to IAM for example, which is a global service, EC2 is a regional service.
-
-
-
-What is AMI?
-
-Amazon Machine Image. "An Amazon Machine Image (AMI) provides the information required to launch an instance".
-Read more [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
-
-
-
-What are the different sources for AMIs?
-
-* Personal AMIs - AMIs you create
-* AWS Marketplace for AMIs - paid AMIs, usually bundled with licensed software
-* Community AMIs - Free
-
-
-
-What is an instance type?
-
-"the instance type that you specify determines the hardware of the host computer used for your instance"
-Read more about instance types [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html)
-
-
-
-True or False? The following are instance types available for a user in AWS:
-
- * Compute optimized
 * Network optimized
 * Web optimized
-
-False. From the above list, only compute optimized is available.
-
-
-
-What is EBS?
-
-"provides block level storage volumes for use with EC2 instances. EBS volumes behave like raw, unformatted block devices."
-More on EBS [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html)
-
-
-
-What EC2 pricing models are there?
-
-On Demand - pay a fixed rate by the hour/second with no commitment. You can provision and terminate it at any given time.
-Reserved - you get a capacity reservation; basically you purchase an instance for a fixed period of time. The longer, the cheaper.
-Spot - Enables you to bid whatever price you want for instances or pay the spot price.
-Dedicated Hosts - physical EC2 server dedicated for your use.
-
-
-
-What are Security Groups?
-
-"A security group acts as a virtual firewall that controls the traffic for one or more instances"
-More on this subject [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html)
-
-
-
-How to migrate an instance to another availability zone?
-
-
-
-What can you attach to an EC2 instance in order to store data?
-
-EBS
-
-
-
-What EC2 RI types are there?
-
-Standard RI - most significant discount + suited for steady-state usage
-Convertible RI - discount + change attribute of RI + suited for steady-state usage
-Scheduled RI - launch within time windows you reserve
-
-Learn more about EC2 RI [here](https://aws.amazon.com/ec2/pricing/reserved-instances)
-
-
-
-You would like to invoke a function every time you enter a URL in the browser. Which service would you use for that?
-
-AWS Lambda
-
-
-#### AWS - Lambda
-
-
-Explain what is AWS Lambda
-
-AWS definition: "AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume."
-
-Read more on it [here](https://aws.amazon.com/lambda)
-
-
-
-True or False? In AWS Lambda, you are charged as long as a function exists, regardless of whether it's running or not
-
-False. Charges are being made when the code is executed.
-
-
-
-Which of the following set of languages Lambda supports?
-
-- R, Swift, Rust, Kotlin
-- Python, Ruby, Go
-- Python, Ruby, PHP
-
-
-- Python, Ruby, Go
-
-
-
-True or False? Basic lambda permissions allow you only to upload logs to Amazon CloudWatch Logs
-
-True
-
-
-#### AWS Containers
-
-
-What is Amazon ECS?
-
-Amazon definition: "Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. Customers such as Duolingo, Samsung, GE, and Cook Pad use ECS to run their most sensitive and mission critical applications because of its security, reliability, and scalability."
-
-Learn more [here](https://aws.amazon.com/ecs)
-
-
-
-What is Amazon ECR?
-
-Amazon definition: "Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images."
-
-Learn more [here](https://aws.amazon.com/ecr)
-
-
-
-What is AWS Fargate?
-
-Amazon definition: "AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS)."
-
-Learn more [here](https://aws.amazon.com/fargate)
-
-
-#### AWS Storage
-
-
-Explain what is AWS S3?
-
-S3 stands for Simple Storage Service (three S's).
-S3 is an object storage service which is fast, scalable and durable. S3 enables customers to upload, download or store any file or object up to 5 TB in size.
-
-More on S3 [here](https://aws.amazon.com/s3)
-
-
-
-What is a bucket?
-
-An S3 bucket is a resource which is similar to a folder in a file system and allows storing objects, which consist of data.
-
-
-
-True or False? A bucket name must be globally unique
-
-True
-
-
-
-Explain folders and objects in regards to buckets
-
-* Folder - any sub folder in an s3 bucket
-* Object - The files which are stored in a bucket
-
-
-
-Explain the following:
-
- - Object Lifecycles
- - Object Sharing
- - Object Versioning
-
- * Object Lifecycles - Transfer objects between storage classes based on defined rules of time periods
- * Object Sharing - Share objects via a URL link
- * Object Versioning - Manage multiple versions of an object
-
-
-
-Explain Object Durability and Object Availability
-
-Object Durability: The percent over a one-year time period that a file will not be lost
-Object Availability: The percent over a one-year time period that a file will be accessible
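-
-To get a sense of scale, AWS illustrates 99.999999999% (11 9's) durability as follows: if you store 10,000,000 objects, you can on average expect to lose a single object once every 10,000 years.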
-
-
-
-What is a storage class? What storage classes are there?
-
-Each object has a storage class assigned to it, affecting its availability and durability. This also has an effect on costs.
-Storage classes offered today:
- * Standard:
- * Used for general, all-purpose storage (mostly storage that needs to be accessed frequently)
- * The most expensive storage class
-    * 99.999999999% (11 9's) durability
-    * 99.99% availability
- * Default storage class
-
- * Standard-IA (Infrequent Access)
- * Long lived, infrequently accessed data but must be available the moment it's being accessed
-    * 99.999999999% (11 9's) durability
- * 99.90% availability
-
- * One Zone-IA (Infrequent Access):
- * Long-lived, infrequently accessed, non-critical data
- * Less expensive than Standard and Standard-IA storage classes
-    * 99.999999999% (11 9's) durability
- * 99.50% availability
-
- * Intelligent-Tiering:
- * Long-lived data with changing or unknown access patterns. Basically, In this class the data automatically moves to the class most suitable for you based on usage patterns
- * Price depends on the used class
-    * 99.999999999% (11 9's) durability
- * 99.90% availability
-
- * Glacier: Archive data with retrieval time ranging from minutes to hours
- * Glacier Deep Archive: Archive data that rarely, if ever, needs to be accessed with retrieval times in hours
- * Both Glacier and Glacier Deep Archive are:
-    * The cheapest storage classes
-    * have 99.999999999% (11 9's) durability
-
-More on storage classes [here](https://aws.amazon.com/s3/storage-classes)
-
-
-
-
-A customer would like to move data which is rarely accessed from the standard storage class to the cheapest class there is. Which storage class should be used?
-
- * One Zone-IA
- * Glacier Deep Archive
- * Intelligent-Tiering
-
-Glacier Deep Archive
-
-
-
-What Glacier retrieval options are available for the user?
-
-Expedited, Standard and Bulk
-
-
-
-True or False? Each AWS account can store up to 500 petabytes of data. Any additional storage will cost double
-
-False. Unlimited capacity.
-
-
-
-Explain what is Storage Gateway
-
-"AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage".
-More on Storage Gateway [here](https://aws.amazon.com/storagegateway)
-
-
-
-Explain the following Storage Gateway deployments types
-
- * File Gateway
- * Volume Gateway
- * Tape Gateway
-
-Explained in detail [here](https://aws.amazon.com/storagegateway/faqs)
-
-
-
-What is the difference between stored volumes and cached volumes?
-
-Stored Volumes - data is located at the customer's data center and periodically backed up to AWS
-Cached Volumes - data is stored in the AWS cloud and cached at the customer's data center for quick access
-
-
-
-What is "Amazon S3 Transfer Acceleration"?
-
-AWS definition: "Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket"
-
-Learn more [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html)
-
-
-
-Explain data consistency
- S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in S3 buckets in all AWS Regions. S3 always returns the latest version of an object.
-
-
-
-Can you host dynamic websites on S3? What about static websites?
- No. S3 supports only static websites. On a static website, individual webpages include static content. They might also contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting.
-
-
-
-What security measures have you taken in context of S3?
- * Enable versioning.
 * Don't make the bucket public.
- * Enable encryption if it's disabled.
-
-
-
-What storage options are there for EC2 Instances?
-
-
-
-What is Amazon EFS?
-
-Amazon definition: "Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources."
-
-Learn more [here](https://aws.amazon.com/efs)
-
-
-
-What is AWS Snowmobile?
-
-"AWS Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS."
-
-Learn more [here](https://aws.amazon.com/snowmobile)
-
-
-#### AWS Disaster Recovery
-
-
-In regards to disaster recovery, what is RTO and RPO?
-
-RTO - The maximum acceptable length of time that your application can be offline.
-
-RPO - The maximum acceptable length of time during which data might be lost from your application due to an incident.
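-
-For example, an RPO of one hour means backups or replication must capture data at least every hour, while an RTO of four hours means the application must be back online within four hours of the incident.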
-
-
-
-What types of disaster recovery techniques AWS supports?
-
-* The Cold Method - periodic backups, with the backups sent off-site
-* Pilot Light - Data is mirrored to an environment which is always running
-* Warm Standby - Running scaled down version of production environment
-* Multi-site - Duplicated environment that is always running
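-
-The list above is roughly ordered from the cheapest option with the longest recovery time (the cold method) to the most expensive one with near-zero downtime (multi-site).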
-
-
-
-Which disaster recovery option has the highest downtime and which has the lowest?
-
-Lowest - Multi-site
-Highest - The cold method
-
-
-#### AWS CloudFront
-
-
-Explain what is CloudFront
-
-AWS definition: "Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment."
-
-More on CloudFront [here](https://aws.amazon.com/cloudfront)
-
-
-
-Explain the following
-
- * Origin
- * Edge location
- * Distribution
-
-
-
-What delivery methods are available to the user with CDN?
-
-
-
-True or False? Objects are cached for the life of the TTL
-
-True
-
-
-
-What is AWS Snowball?
-
-A transport solution designed for transferring large amounts of data (petabyte-scale) into and out of the AWS cloud.
-
-
-##### AWS ELB
-
-
-What is ELB (Elastic Load Balancing)?
-
-AWS definition: "Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions."
-
-More on ELB [here](https://aws.amazon.com/elasticloadbalancing)
-
-
-
-What types of load balancers are supported in EC2 and what are they used for?
-
- * Application LB - layer 7 traffic
- * Network LB - ultra-high performances or static IP address (layer 4)
- * Classic LB - low costs, good for test or dev environments (retired by August 15, 2022)
 * Gateway LB - a transparent network gateway that distributes traffic to appliances such as firewalls, intrusion detection and prevention systems, and deep packet inspection systems (layer 3)
-
-
-#### AWS Security
-
-
-What is the shared responsibility model? What AWS is responsible for and what the user is responsible for based on the shared responsibility model?
-
-The shared responsibility model defines what the customer is responsible for and what AWS is responsible for.
-
-More on the shared responsibility model [here](https://aws.amazon.com/compliance/shared-responsibility-model)
-
-
-
-True or False? Based on the shared responsibility model, Amazon is responsible for physical CPUs and security groups on instances
-
-False. AWS is responsible for the hardware in its sites, but not for security groups, which are created and managed by the users.
-
-
-
-Explain "Shared Controls" in regards to the shared responsibility model
-
-AWS definition: "apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services"
-
-Learn more about it [here](https://aws.amazon.com/compliance/shared-responsibility-model)
-
-
-
-What is the AWS compliance program?
-
-
-
-How to secure instances in AWS?
-
- * Instance IAM roles should have minimal permissions needed. You don't want an instance-level incident to become an account-level incident
- * Use "AWS System Manager Session Manager" for SSH
- * Using latest OS images with your instances
-
-
-
-What is AWS Artifact?
-
-AWS definition: "AWS Artifact is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to AWS’ security and compliance reports and select online agreements."
-
-Read more about it [here](https://aws.amazon.com/artifact)
-
-
-
-What is AWS Inspector?
-
-AWS definition: "Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.""
-
-Learn more [here](https://aws.amazon.com/inspector)
-
-
-
-What is AWS GuardDuty?
-
-AWS definition: "Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your Amazon Web Services accounts, workloads, and data stored in Amazon S3"
-
-It monitors VPC Flow Logs, DNS logs, CloudTrail S3 data events and CloudTrail management events.
-
-
-
-What is AWS Shield?
-
-AWS definition: "AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS."
-
-
-
-What is AWS WAF? Give an example of how it can be used and describe what resources or services you can use it with
-
-
-
-What is AWS VPN used for?
-
-
-
-What is the difference between Site-to-Site VPN and Client VPN?
-
-
-
-What is AWS CloudHSM?
-
-Amazon definition: "AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud."
-
-Learn more [here](https://aws.amazon.com/cloudhsm)
-
-
-
-True or False? AWS Inspector can perform both network and host assessments
-
-True
-
-
-
-What is AWS Key Management Service (KMS)?
-
-AWS definition: "KMS makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications."
-More on KMS [here](https://aws.amazon.com/kms)
-
-
-
-What is AWS Acceptable Use Policy?
-
-It describes prohibited uses of the web services offered by AWS.
-More on AWS Acceptable Use Policy [here](https://aws.amazon.com/aup)
-
-
-
-True or False? A user is not allowed to perform penetration testing on any of the AWS services
-
-False. On some services, like EC2, CloudFront and RDS, penetration testing is allowed.
-
-
-
-True or False? DDoS attack is an example of allowed penetration testing activity
-
-False.
-
-
-
-True or False? AWS Access Key is a type of MFA device used for AWS resources protection
-
-False. A security key is an example of an MFA device.
-
-
-
-What is Amazon Cognito?
-
-Amazon definition: "Amazon Cognito handles user authentication and authorization for your web and mobile apps."
-
-Learn more [here](https://docs.aws.amazon.com/cognito/index.html)
-
-
-
-What is AWS ACM?
-
-Amazon definition: "AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources."
-
-Learn more [here](https://aws.amazon.com/certificate-manager)
-
-
-#### AWS Databases
-
-
-What is AWS RDS?
-
-
-
-What is AWS DynamoDB?
-
-
-
-Explain "Point-in-Time Recovery" feature in DynamoDB
-
-Amazon definition: "You can create on-demand backups of your Amazon DynamoDB tables, or you can enable continuous backups using point-in-time recovery. For more information about on-demand backups, see On-Demand Backup and Restore for DynamoDB."
-
-Learn more [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html)
-
-
-
-Explain "Global Tables" in DynamoDB
-
-Amazon definition: "A global table is a collection of one or more replica tables, all owned by a single AWS account."
-
-Learn more [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_HowItWorks.html)
-
-
-
-What is DynamoDB Accelerator?
-
-Amazon definition: "Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds..."
-
-Learn more [here](https://aws.amazon.com/dynamodb/dax)
-
-
-
-What is AWS Redshift and how is it different than RDS?
-
-Redshift is a cloud data warehouse optimized for analytical (OLAP) workloads on large datasets, while RDS is a managed relational database service aimed at transactional (OLTP) workloads.
-
-
-
-What do you do if you suspect AWS Redshift performs slowly?
-
-* You can confirm your suspicion by going to the AWS Redshift console and looking at the running queries graph. This should tell you if there are any long-running queries.
-* If confirmed, you can query for running queries and cancel the irrelevant ones
-* Check for connection leaks (query for running connections and include their IP)
-* Check for table locks and kill irrelevant locking sessions
-
-
-
-What is AWS ElastiCache? For what cases is it used?
-
-Amazon Elasticache is a fully managed Redis or Memcached in-memory data store.
-It's great for use cases like two-tier web applications where the most frequently accessed data is stored in ElastiCache so response time is optimal.
-
-
-
-What is Amazon Aurora?
-
-A MySQL- and PostgreSQL-compatible relational database. It is also the default engine proposed to the user when creating a database with RDS.
-Great for use cases like two-tier web applications that have a MySQL or PostgreSQL database layer and need automated backups.
-
-
-
-What is Amazon DocumentDB?
-
-Amazon definition: "Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data."
-
-Learn more [here](https://aws.amazon.com/documentdb)
-
-
-
-What "AWS Database Migration Service" is used for?
-
-
-
-What type of storage is used by Amazon RDS?
-
-EBS
-
-
-
-Explain Amazon RDS Read Replicas
-
-AWS definition: "Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads."
-Read more about it [here](https://aws.amazon.com/rds/features/read-replicas)
-
-
-#### AWS Networking
-
-
-What is VPC?
-
-"A logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network that you define"
-Read more about it [here](https://aws.amazon.com/vpc).
-
-
-
-True or False? VPC spans multiple regions
-
-False. A VPC can span all the availability zones within a single region, but it cannot span multiple regions.
-
-
-
-True or False? Subnets that belong to the same VPC can be in different availability zones
-
-True. Just to clarify, a single subnet resides entirely in one AZ.
-
-
-
-What is an Internet Gateway?
-
-"component that allows communication between instances in your VPC and the internet" (AWS docs).
-Read more about it [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html)
-
-
-
-True or False? NACLs allow or deny traffic at the subnet level
-
-True
-
-
-
-What is VPC peering?
-
-[docs.aws](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html): "A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses."
-
-
-
-True or False? Multiple Internet Gateways can be attached to one VPC
-
-False. Only one internet gateway can be attached to a single VPC.
-
-
-
-What is an Elastic IP address?
-
-An Elastic IP address is a reserved public IP address that you can assign to any EC2 instance in a particular region, until you choose to release it.
-When you associate an Elastic IP address with an EC2 instance, it replaces the default public IP address. If an external hostname was allocated to the instance from your launch settings, the Elastic IP will replace this hostname too; otherwise, one is created for the instance. The Elastic IP address remains in place through events that normally cause the address to change, such as stopping or restarting the instance.
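-
-As a rough sketch, allocating and associating an Elastic IP with the AWS CLI could look like this (the instance and allocation IDs below are hypothetical placeholders):
-
-```
-# Allocate a new Elastic IP in the current region
-aws ec2 allocate-address --domain vpc
-
-# Associate it with an instance (IDs are placeholders)
-aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
-```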
-
-
-
-True or False? Route tables are used to allow or deny traffic from the internet to AWS instances
-
-False. Route tables are used to determine where network traffic is directed, not to allow or deny it.
-
-
-
-Explain Security Groups and Network ACLs
-
-* NACL - security layer on the subnet level.
-* Security Group - security layer on the instance level.
-
-Read more about it [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html) and [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
-
-
-
-What is AWS Direct Connect?
-
-It allows you to establish a dedicated private network connection between your corporate network and AWS.
-
-
-#### AWS - Identify the service or tool
-
-
-What would you use for automating code/software deployments?
-
-AWS CodeDeploy
-
-
-
-What would you use for easily creating similar AWS environments/resources for different customers?
-
-CloudFormation
-
-
-
-Using which service can you add user sign-up, sign-in and access control to mobile and web apps?
-
-Cognito
-
-
-
-Which service would you use for building a website or web application?
-
-Lightsail
-
-
-
-Which tool would you use for choosing between Reserved instances or On-Demand instances?
-
-Cost Explorer
-
-
-
-What would you use to check how many unassociated Elastic IP addresses you have?
-
-Trusted Advisor
-
-
-
-Which service allows you to transfer large amounts (Petabytes) of data in and out of the AWS cloud?
-
-AWS Snowball
-
-
-
-Which service would you use if you need a data warehouse?
-
-AWS RedShift
-
-
-
-Which service provides a virtual network dedicated to your AWS account?
-
-VPC
-
-
-
-What would you use for having automated backups for an application that has a MySQL database layer?
-
-Amazon Aurora
-
-
-
-What would you use to migrate on-premise database to AWS?
-
-AWS Database Migration Service (DMS)
-
-
-
-What would you use to check why certain EC2 instances were terminated?
-
-AWS CloudTrail
-
-
-
-What would you use for SQL database?
-
-AWS RDS
-
-
-
-What would you use for NoSQL database?
-
-AWS DynamoDB
-
-
-
-What would you use for adding image and video analysis to your application?
-
-AWS Rekognition
-
-
-
-Which service would you use for debugging and improving performance issues with your applications?
-
-AWS X-Ray
-
-
-
-Which service is used for sending notifications?
-
-SNS
-
-
-
-What would you use for running SQL queries interactively on S3?
-
-AWS Athena
-
-
-
-What would you use for preparing and combining data for analytics or ML?
-
-AWS Glue
-
-
-
-Which service would you use for monitoring malicious activity and unauthorized behavior in regards to AWS accounts and workloads?
-
-Amazon GuardDuty
-
-
-
-Which service would you use to centrally manage billing, access control, compliance, and security across multiple AWS accounts?
-
-AWS Organizations
-
-
-
-Which service would you use for web application protection?
-
-AWS WAF
-
-
-
-You would like to monitor some of your resources in the different services. Which service would you use for that?
-
-CloudWatch
-
-
-
-Which service would you use for performing security assessment?
-
-AWS Inspector
-
-
-
-Which service would you use for creating DNS records?
-
-Route 53
-
-
-
-What would you use if you need a fully managed document database?
-
-Amazon DocumentDB
-
-
-
-Which service would you use to add access control (or sign-up, sign-in forms) to your web/mobile apps?
-
-AWS Cognito
-
-
-
-Which service would you use if you need a messaging queue?
-
-Simple Queue Service (SQS)
-
-
-
-Which service would you use if you need managed DDoS protection?
-
-AWS Shield
-
-
-
-Which service would you use if you need to store frequently used data for low latency access?
-
-ElastiCache
-
-
-
-What would you use to transfer files over long distances between a client and an S3 bucket?
-
-Amazon S3 Transfer Acceleration
-
-
-
-Which service would you use for distributing incoming requests across multiple endpoints?
-
-Route 53
-
-
-
-Which services are involved in getting a custom string (based on the input) when inserting a URL in the browser?
-
-Lambda - to define a function that gets an input and returns a certain string
-API Gateway - to define the URL trigger (= when you insert the URL, the function is invoked).
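-
-As a rough sketch, the Lambda side of such a setup (assuming an API Gateway proxy integration) could look like this; the greeting logic is made up for illustration:
-
-```
-def lambda_handler(event, context):
-    # API Gateway (proxy integration) passes query string parameters in the event
-    params = event.get("queryStringParameters") or {}
-    name = params.get("name", "world")
-    # The returned dict is translated by API Gateway into an HTTP response
-    return {"statusCode": 200, "body": "Hello, {}!".format(name)}
-```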
-
-
-
-Which service would you use for data or events streaming?
-
-Kinesis
-
-
-#### AWS DNS
-
-
-What is Route 53?
-
-"Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service..."
-Some of Route 53 features:
- * Register domain
- * DNS service - domain name translations
- * Health checks - verify your app is available
-
-More on Route 53 [here](https://aws.amazon.com/route53)
-
-
-#### AWS Monitoring & Logging
-
-
-What is AWS CloudWatch?
-
-AWS definition: "Amazon CloudWatch is a monitoring and observability service..."
-
-More on CloudWatch [here](https://aws.amazon.com/cloudwatch)
-
-
-
-What is AWS CloudTrail?
-
-AWS definition: "AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account."
-
-Read more on CloudTrail [here](https://aws.amazon.com/cloudtrail)
-
-
-
-What is Simple Notification Service?
-
-AWS definition: "a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications."
-
-Read more about it [here](https://aws.amazon.com/sns)
-
-
-
-Explain the following in regards to SNS:
-
- - Topics
- - Subscribers
- - Publishers
-
- * Topics - used for grouping multiple endpoints
- * Subscribers - the endpoints where topics send messages to
- * Publishers - the provider of the message (event, person, ...). A CLI sketch of this flow follows.
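-
-A minimal sketch with the AWS CLI (the account ID and email address are placeholders):
-
-```
-# Create a topic (the grouping of endpoints)
-aws sns create-topic --name my-topic
-
-# Add a subscriber (the endpoint messages are sent to)
-aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:my-topic --protocol email --notification-endpoint user@example.com
-
-# Publish a message (acting as the publisher)
-aws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:my-topic --message "Hello subscribers"
-```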
-
-
-#### AWS Billing & Support
-
-
-What is AWS Organizations?
-
-AWS definition: "AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS."
-More on Organizations [here](https://aws.amazon.com/organizations)
-
-
-
-What are Service Control Policies and to which service do they belong?
-
-They belong to the AWS Organizations service. The definition by Amazon: "SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines."
-
-Learn more [here](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html)
-
-
-
-Explain the AWS pricing model
-
-It mainly works on a "pay-as-you-go" basis, meaning you pay only for what you are using and when you are using it.
-In S3 you pay for 1. how much data you are storing 2. making requests (PUT, POST, ...)
-In EC2 it's based on the purchasing option (on-demand, spot, ...), instance type, AMI type and the region used.
-
-More on AWS pricing model [here](https://aws.amazon.com/pricing)
-
-
-
-How should one estimate AWS costs when, for example, comparing them to on-premise solutions?
-
-* TCO calculator
-* AWS simple calculator
-* Cost Explorer
-
-
-
-What does basic support in AWS include?
-
-* 24x7 customer service
-* Trusted Advisor
-* AWS Personal Health Dashboard
-
-
-
-How are EC2 instances billed?
-
-
-
-What is the AWS Pricing Calculator used for?
-
-
-
-What is Amazon Connect?
-
-Amazon definition: "Amazon Connect is an easy to use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost."
-
-Learn more [here](https://aws.amazon.com/connect)
-
-
-
-What are "APN Consulting Partners"?
-
-Amazon definition: "APN Consulting Partners are professional services firms that help customers of all types and sizes design, architect, build, migrate, and manage their workloads and applications on AWS, accelerating their journey to the cloud."
-
-Learn more [here](https://aws.amazon.com/partners/consulting)
-
-
-
-Which of the following are AWS support plan types (sorted in order)?
-
- - Basic, Developer, Business, Enterprise
- - Newbie, Intermediate, Pro, Enterprise
- - Developer, Basic, Business, Enterprise
- - Beginner, Pro, Intermediate Enterprise
-
-
- - Basic, Developer, Business, Enterprise
-
-
-
-True or False? Region is a factor when it comes to EC2 costs/pricing
-
-True. You pay differently based on the chosen region.
-
-
-
-What is "AWS Infrastructure Event Management"?
-
-AWS Definition: "AWS Infrastructure Event Management is a structured program available to Enterprise Support customers (and Business Support customers for an additional fee) that helps you plan for large-scale events such as product or application launches, infrastructure migrations, and marketing events."
-
-
-#### AWS Automation
-
-
-What is AWS CodeDeploy?
-
-Amazon definition: "AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers."
-
-Learn more [here](https://aws.amazon.com/codedeploy)
-
-
-
-Explain what CloudFormation is
-
-
-#### AWS - Misc
-
-
-Which AWS service do you have experience with that you think is not very common?
-
-
-
-What is AWS CloudSearch?
-
-
-
-What is AWS Lightsail?
-
-AWS definition: "Lightsail is an easy-to-use cloud platform that offers you everything needed to build an application or website, plus a cost-effective, monthly plan."
-
-
-
-What is AWS Rekognition?
-
-AWS definition: "Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise to use."
-
-Learn more [here](https://aws.amazon.com/rekognition)
-
-
-
-What are AWS Resource Groups used for?
-
-Amazon definition: "You can use resource groups to organize your AWS resources. Resource groups make it easier to manage and automate tasks on large numbers of resources at one time. "
-
-Learn more [here](https://docs.aws.amazon.com/ARG/latest/userguide/welcome.html)
-
-
-
-What is AWS Global Accelerator?
-
-Amazon definition: "AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users..."
-
-Learn more [here](https://aws.amazon.com/global-accelerator)
-
-
-
-What is AWS Config?
-
-Amazon definition: "AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources."
-
-Learn more [here](https://aws.amazon.com/config)
-
-
-
-What is AWS X-Ray?
-
-AWS definition: "AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture."
-Learn more [here](https://aws.amazon.com/xray)
-
-
-
-What is AWS OpsWorks?
-
-Amazon definition: "AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet."
-
-Learn more about it [here](https://aws.amazon.com/opsworks)
-
-
-
-What is AWS Athena?
-
-"Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL."
-
-Learn more about AWS Athena [here](https://aws.amazon.com/athena)
-
-
-
-What is Amazon Cloud Directory?
-
-Amazon definition: "Amazon Cloud Directory is a highly available multi-tenant directory-based store in AWS. These directories scale automatically to hundreds of millions of objects as needed for applications."
-
-Learn more [here](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/what_is_cloud_directory.html)
-
-
-
-What is AWS Elastic Beanstalk?
-
-AWS definition: "AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services...You can simply upload your code and Elastic Beanstalk automatically handles the deployment"
-
-Learn more about it [here](https://aws.amazon.com/elasticbeanstalk)
-
-
-
-What is AWS SWF?
-
-Amazon definition: "Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud."
-
-Learn more on Amazon Simple Workflow Service [here](https://aws.amazon.com/swf)
-
-
-
-What is AWS EMR?
-
-AWS definition: "big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto."
-
-Learn more [here](https://aws.amazon.com/emr)
-
-
-
-What is AWS Quick Starts?
-
-AWS definition: "Quick Starts are built by AWS solutions architects and partners to help you deploy popular technologies on AWS, based on AWS best practices for security and high availability."
-
-Read more [here](https://aws.amazon.com/quickstart)
-
-
-
-What is the Trusted Advisor?
-
-
-
-What is AWS Service Catalog?
-
-Amazon definition: "AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS."
-
-Learn more [here](https://aws.amazon.com/servicecatalog)
-
-
-
-What is AWS CAF?
-
-Amazon definition: "AWS Professional Services created the AWS Cloud Adoption Framework (AWS CAF) to help organizations design and travel an accelerated path to successful cloud adoption. "
-
-Learn more [here](https://aws.amazon.com/professional-services/CAF)
-
-
-
-What is AWS Cloud9?
-
-AWS: "AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser"
-
-
-
-What is AWS CloudShell?
-
-AWS: "AWS CloudShell is a browser-based shell that makes it easy to securely manage, explore, and interact with your AWS resources."
-
-
-
-What is AWS Application Discovery Service?
-
-Amazon definition: "AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers."
-
-Learn more [here](https://aws.amazon.com/application-discovery)
-
-
-
-What is the AWS Well-Architected Framework and which pillars is it based on?
-
-AWS definition: "The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. Based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization"
-
-Learn more [here](https://aws.amazon.com/architecture/well-architected)
-
-
-
-What AWS services are serverless (or have the option to be serverless)?
-
- * AWS Lambda
- * AWS Athena
-
-
-
-What is Simple Queue Service (SQS)?
-
-AWS definition: "Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications".
-
-Learn more about it [here](https://aws.amazon.com/sqs)
-
-
## Network
@@ -4851,3810 +2554,6 @@ Yes, it's a operating-system-level virtualization, where the kernel is shared an
The introduction of virtual machines allowed companies to deploy multiple business applications on the same hardware, with each application separated from the others in a secure way, each running on its own separate operating system.
-## Ansible
-
-### Ansible Exercises
-
-|Name|Topic|Objective & Instructions|Solution|Comments|
-|--------|--------|------|----|----|
-| My First Task | Tasks | [Exercise](exercises/ansible/my_first_task.md) | [Solution](exercises/ansible/solutions/my_first_task.md)
-| Upgrade and Update Task | Tasks | [Exercise](exercises/ansible/update_upgrade_task.md) | [Solution](exercises/ansible/solutions/update_upgrade_task.md)
-| My First Playbook | Playbooks | [Exercise](exercises/ansible/my_first_playbook.md) | [Solution](exercises/ansible/solutions/my_first_playbook.md)
-
-
-### Ansible Self Assessment
-
-
-Describe each of the following components in Ansible, including the relationship between them:
-
- * Task
- * Module
- * Play
- * Playbook
- * Role
-
-
-Task – a call to a specific Ansible module
-Module – the actual unit of code executed by Ansible on your own host or a remote host. Modules are indexed by category (database, file, network, …) and also referred to as task plugins.
-
-Play – one or more tasks executed on a given host(s)
-
-Playbook – one or more plays. Each play can be executed on the same or different hosts
-
-Role – Ansible roles allow you to group resources based on certain functionality/service such that they can be easily reused. In a role, you have directories for variables, defaults, files, templates, handlers, tasks, and metadata. You can then use the role by simply specifying it in your playbook.
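-
-For example, a playbook using a hypothetical role named "web" (expected to live under roles/web/) could look like this:
-
-```
----
-- name: Configure web servers
-  hosts: web_servers
-  roles:
-    # Runs the tasks, handlers, defaults, etc. defined in roles/web/
-    - web
-```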
-
-
-
-How is Ansible different from other automation tools? (e.g. Chef, Puppet, etc.)
-
-Ansible is:
-
-* Agentless
-* Minimal run requirements (Python & SSH) and simple to use
-* Default mode is "push" (it supports also pull)
-* Focus on simplicity and ease-of-use
-
-
-
-
-True or False? Ansible follows the mutable infrastructure paradigm
-
-True. In immutable infrastructure approach, you'll replace infrastructure instead of modifying it.
-Ansible rather follows the mutable infrastructure paradigm, where it allows you to change the configuration of different components. This approach is not perfect and has its own disadvantages, like "configuration drift", where different components may reach a different state for different reasons.
-
-
-
-True or False? Ansible uses declarative style to describe the expected end state
-
-False. It uses a procedural style.
-
-
-
-What kind of automation you wouldn't do with Ansible and why?
-
-While it's possible to provision resources with Ansible, some prefer to use tools that follow immutable infrastructure paradigm.
-Ansible doesn't save state by default. So a task that creates 5 instances, for example, will create 5 additional instances when executed again (unless an
-additional check is implemented or explicit names are provided), while other tools might check if 5 instances exist. If only 4 exist (by checking the state file for example), one additional instance will be created to reach the end goal of 5 instances.
-
-
-
-How do you list all modules and how can you see details on a specific module?
-
-1. Ansible online docs
-2. `ansible-doc -l` for list of modules and `ansible-doc [module_name]` for detailed information on a specific module
-
-
-#### Ansible - Inventory
-
-
-What is an inventory file and how do you define one?
-
-An inventory file defines hosts and/or groups of hosts on which Ansible tasks are executed.
-
-An example of inventory file:
-
-```
-192.168.1.2
-192.168.1.3
-192.168.1.4
-
-[web_servers]
-190.40.2.20
-190.40.2.21
-190.40.2.22
-```
-
-
-
-What is a dynamic inventory file? When would you use one?
-
-A dynamic inventory file tracks hosts from one or more sources like cloud providers and CMDB systems.
-
-You should use one when using external sources and especially when the hosts in your environment are being automatically
-spun up and shut down, without you tracking every change in these sources.
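-
-As an illustration, a minimal configuration for the amazon.aws.aws_ec2 inventory plugin (assuming the collection is installed and AWS credentials are configured) might look like this:
-
-```
-# aws_ec2.yml - hosts are discovered from EC2 instead of being listed statically
-plugin: amazon.aws.aws_ec2
-regions:
-  - us-east-1
-```
-
-You could then inspect the discovered hosts with `ansible-inventory -i aws_ec2.yml --list`.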
-
-
-#### Ansible - Variables
-
-
-Modify the following task to use a variable instead of the value "zlib" and have "zlib" as the default in case the variable is not defined
-
-```
-- name: Install a package
- package:
- name: "zlib"
- state: present
-```
-
-
-```
-- name: Install a package
- package:
- name: "{{ package_name|default('zlib') }}"
- state: present
-```
-
-
-
-How to make the variable "use_var" optional?
-
-```
-- name: Install a package
- package:
- name: "zlib"
- state: present
- use: "{{ use_var }}"
-```
-
-
-
-With "default(omit)"
-```
-- name: Install a package
- package:
- name: "zlib"
- state: present
- use: "{{ use_var|default(omit) }}"
-```
-
-
-
-What would be the result of the following play?
-
-```
----
-- name: Print information about my host
- hosts: localhost
- gather_facts: 'no'
- tasks:
- - name: Print hostname
- debug:
- msg: "It's me, {{ ansible_hostname }}"
-```
-
-When given written code, always inspect it thoroughly. If your answer is “this will fail” then you are right. We are using a fact (ansible_hostname), which is a piece of information gathered from the host we are running on. But in this case, we disabled fact gathering (gather_facts: no), so the variable would be undefined, which will result in failure.
-
-
-
-When will the value '2017' be used in this case: `{{ lookup('env', 'BEST_YEAR') | default('2017', true) }}`?
-
-When the environment variable 'BEST_YEAR' is empty or false.
-
-
-
-If the value of certain variable is 1, you would like to use the value "one", otherwise, use "two". How would you do it?
-
-`{{ (certain_variable == 1) | ternary("one", "two") }}`
-
-
-
-The value of a certain variable you use is the string "True". You would like the value to be a boolean. How would you cast it?
-
-`{{ some_string_var | bool }}`
-
-
-
-You want to run an Ansible playbook only on a specific minor version of your OS. How would you achieve that?
-
-
-
-What the "become" directive used for in Ansible?
-
-
-
-What are facts? How to see all the facts of a certain host?
-
-
-
-What would be the result of running the following task? How to fix it?
-
-```
-- hosts: localhost
- tasks:
- - name: Install zlib
- package:
- name: zlib
- state: present
-```
-
-
-
-
-Which Ansible best practices are you familiar with? Name at least three
-
-
-
-Explain the directory layout of an Ansible role
-
-
-
-What are 'blocks' used for in Ansible?
-
-
-
-How do you handle errors in Ansible?
-
-
-
-You would like to run a certain command if a task fails. How would you achieve that?
-
-
-
-Write a playbook to install ‘zlib’ and ‘vim’ on all hosts if the file ‘/tmp/mario’ exists on the system.
-
-```
----
-- hosts: all
- vars:
- mario_file: /tmp/mario
- package_list:
- - 'zlib'
- - 'vim'
- tasks:
- - name: Check for mario file
- stat:
- path: "{{ mario_file }}"
- register: mario_f
-
- - name: Install zlib and vim if mario file exists
- become: "yes"
- package:
- name: "{{ item }}"
- state: present
- with_items: "{{ package_list }}"
- when: mario_f.stat.exists
-```
-
-
-
-Write a single task that verifies all the files in files_list variable exist on the host
-
-```
-- name: Ensure all files exist
-  stat:
-    path: "{{ item }}"
-  register: file_stat
-  failed_when: not file_stat.stat.exists
-  loop: "{{ files_list }}"
-```
-
-
-
-Write a playbook to deploy the file ‘/tmp/system_info’ on all hosts except for the controllers group, with the following content
-
- ```
- I'm <HOSTNAME> and my operating system is <OS>
- ```
-
- Replace <HOSTNAME> and <OS> with the actual data for the specific host you are running on
-
-The playbook to deploy the system_info file
-
-```
----
-- name: Deploy /tmp/system_info file
- hosts: all:!controllers
- tasks:
- - name: Deploy /tmp/system_info
- template:
- src: system_info.j2
- dest: /tmp/system_info
-```
-
-The content of the system_info.j2 template
-
-```
-# {{ ansible_managed }}
-I'm {{ ansible_hostname }} and my operating system is {{ ansible_distribution }}
-```
-
-
-
-The variable 'whoami' defined in the following places:
-
- * role defaults -> whoami: mario
- * extra vars (variables you pass to Ansible CLI with -e) -> whoami: toad
- * host facts -> whoami: luigi
- * inventory variables (doesn’t matter which type) -> whoami: bowser
-
-According to variable precedence, which one will be used?
-
-The right answer is ‘toad’.
-
-Variable precedence is about how variables override each other when they are set in different locations. If you haven't experienced it so far, I'm sure at some point you will, which makes it a useful topic to be aware of.
-
-In the context of our question, the order will be extra vars (always override any other variable) -> host facts -> inventory variables -> role defaults (the weakest).
-
-Here is the order of precedence from least to greatest (the last listed variables winning prioritization):
-
-1. command line values (eg “-u user”)
-2. role defaults [[1\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id15)
-3. inventory file or script group vars [[2\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id16)
-4. inventory group_vars/all [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
-5. playbook group_vars/all [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
-6. inventory group_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
-7. playbook group_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
-8. inventory file or script host vars [[2\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id16)
-9. inventory host_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
-10. playbook host_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
-11. host facts / cached set_facts [[4\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id18)
-12. play vars
-13. play vars_prompt
-14. play vars_files
-15. role vars (defined in role/vars/main.yml)
-16. block vars (only for tasks in block)
-17. task vars (only for the task)
-18. include_vars
-19. set_facts / registered vars
-20. role (and include_role) params
-21. include params
-22. extra vars (always win precedence)
-
-A full list can be found at [PlayBook Variables](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#ansible-variable-precedence) . Also, note there is a significant difference between Ansible 1.x and 2.x.
-
-
-
-For each of the following statements determine if it's true or false:
-
- * A module is a collection of tasks
- * It’s better to use shell or command instead of a specific module
- * Host facts override play variables
- * A role might include the following: vars, meta, and handlers
- * Dynamic inventory is generated by extracting information from external sources
- * It’s a best practice to use indention of 2 spaces instead of 4
- * ‘notify’ used to trigger handlers
- * This “hosts: all:!controllers” means ‘run only on controllers group hosts’
-
-
-
-Explain the difference between forks, serial & throttle.
-
-`serial` is like running the playbook for each host in turn: Ansible waits for completion of the complete play on one batch of hosts before moving on to the next. `forks`=1 means run the first task in a play on one host before running the same task on the next host, so the first task will be run for each host before the next task is touched. The default forks value is 5 in Ansible.
-
-```
-[defaults]
-forks = 30
-```
-
-```
-- hosts: webservers
- serial: 1
- tasks:
- - name: ...
-```
-
-Ansible also supports `throttle`. This keyword limits the number of workers up to the maximum set via the forks setting or serial. This can be useful for restricting tasks that may be CPU-intensive or interact with a rate-limiting API.
-
-```
-tasks:
-- command: /path/to/cpu_intensive_command
- throttle: 1
-```
-
-
-
-
-What is ansible-pull? How is it different from how ansible-playbook works?
-
-
-
-What is Ansible Vault?
-
-
-
-Demonstrate each of the following with Ansible:
-
- * Conditionals
- * Loops
-
-
-
-
-What are filters? Do you have experience with writing filters?
-
-
-
-Write a filter to capitalize a string
-
-```
-class FilterModule(object):
-    # Ansible filter plugins expose their filters via a FilterModule class
-    def filters(self):
-        return {'cap': self.cap}
-    def cap(self, string):
-        return string.capitalize()
-```
-
-
-
-You would like to run a task only if the previous task changed anything. How would you achieve that?
-
-
-
-What are callback plugins? What can you achieve by using callback plugins?
-
-
-
-What are Ansible Collections?
-
-
-
-What is the difference between `include_task` and `import_task`?
-
-
-
-File '/tmp/exercise' includes the following content
-
-```
-Goku = 9001
-Vegeta = 5200
-Trunks = 6000
-Gotenks = 32
-```
-
-With one task, switch the content to:
-
-```
-Goku = 9001
-Vegeta = 250
-Trunks = 40
-Gotenks = 32
-```
-
-
-```
-- name: Change saiyans levels
- lineinfile:
- dest: /tmp/exercise
- regexp: "{{ item.regexp }}"
- line: "{{ item.line }}"
- with_items:
- - { regexp: '^Vegeta', line: 'Vegeta = 250' }
- - { regexp: '^Trunks', line: 'Trunks = 40' }
- ...
-```
-
-
-
-#### Ansible - Execution and Strategy
-
-
-True or False? By default, Ansible will execute all the tasks in play on a single host before proceeding to the next host
-
-False. Ansible will execute a single task on all hosts before moving to the next task in a play. As of today, it uses 5 forks by default.
-This behaviour is described as a "strategy" in Ansible and it's configurable.
-
-
-
-What is a "strategy" in Ansible? What is the default strategy?
-
-A strategy in Ansible describes how Ansible will execute the different tasks on the hosts. By default Ansible uses the "linear" strategy, which defines that each task will run on all hosts before proceeding to the next task.
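-
-A strategy is set at the play level. For example, a sketch switching a play from the default to the "free" strategy:
-
-```
-- hosts: all
-  strategy: free
-  tasks:
-    - name: Ping hosts
-      ping:
-```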
-
-
-
-What strategies are you familiar with in Ansible?
-
- - Linear: the default strategy in Ansible. Run each task on all hosts before proceeding.
- - Free: For each host, run all the tasks until the end of the play as soon as possible
- - Debug: Run tasks in an interactive way
-
-
-
-What is the serial keyword used for?
-
-It's used to specify the number (or percentage) of hosts to run the full play on, before moving to the next batch of hosts in the group.
-
-For example:
-```
-- name: Some play
- hosts: databases
- serial: 4
-```
-
-If your group has 8 hosts, it will run the whole play on 4 hosts and then the same play on the other 4 hosts.
-
-
-#### Ansible Testing
-
-
-How do you test your Ansible based projects?
-
-
-
-What is Molecule? How does it work?
-
-
-
-You run Ansible tests and get "idempotence test failed". What does it mean? Why is idempotence important?
-
-
-#### Ansible - Debugging
-
-
-How to find out the data type of a certain variable in one of the playbooks?
-
-"{{ some_var | type_debug }}"
-
-
-#### Ansible - Collections
-
-
-What are collections in Ansible?
-
-
-## Terraform
-
-
-Explain what Terraform is and how it works
-
-[Terraform.io](https://www.terraform.io/intro/index.html#what-is-terraform-): "Terraform is an infrastructure as code (IaC) tool that allows you to build, change, and version infrastructure safely and efficiently."
-
-
-
-Why one would prefer using Terraform and not other technologies? (e.g. Ansible, Puppet, CloudFormation)
-
-A common *wrong* answer is to say that Ansible and Puppet are configuration management tools
-and Terraform is a provisioning tool. While technically true, it doesn't mean Ansible and Puppet can't
-be used for provisioning infrastructure. Also, it doesn't explain why Terraform should be used over
-CloudFormation if at all.
-
-The benefits of Terraform over the other tools:
-
- * It follows the immutable infrastructure approach which has benefits like avoiding a configuration drift over time
- * Ansible and Puppet are more procedural (you mention what to execute in each step) while Terraform is declarative, since you describe the overall desired state rather than per resource or task. You can give the example of going from 1 to 2 servers in each tool: in Terraform you specify 2, while in Ansible and Puppet you would have to explicitly provision one additional server and make sure you provision only one.
-
-
-
-How do you structure your Terraform projects?
-
-terraform_directory
-  providers.tf -> lists providers (source, version, etc.)
-  variables.tf -> any variable used in other files such as main.tf
-  main.tf -> lists the resources
-
-
-
-True or False? Terraform follows the mutable infrastructure paradigm
-
-False. Terraform follows immutable infrastructure paradigm.
-
-
-
-True or False? Terraform uses declarative style to describe the expected end state
-True
-
-
-
-What is HCL?
-HCL stands for HashiCorp Configuration Language. It is the language HashiCorp made to use as the configuration language for a number of its tools, including Terraform.
-
-
-
-Explain what is "Terraform configuration"
-
-A configuration is a root module along with a tree of child modules that are called as dependencies from the root module.
-
-
-
-Explain what the following commands do:
-
- * terraform init
- * terraform plan
- * terraform validate
- * terraform apply
-
-
-`terraform init` scans your code to figure out which providers you are using and downloads them.
-`terraform plan` lets you see what Terraform is about to do before actually doing it.
-`terraform validate` checks if the configuration is syntactically valid and internally consistent within a directory.
-`terraform apply` will provision the resources specified in the .tf files.
-
-
-#### Terraform - Resources
-
-
-What is a "resource"?
-
-HashiCorp: "Terraform uses resource blocks to manage infrastructure, such as virtual networks, compute instances, or higher-level components such as DNS records. Resource blocks represent one or more infrastructure objects in your Terraform configuration."
-
-
-
-Explain each part of the following line: `resource "aws_instance" "web_server" {...}`
-
- - resource: keyword for defining a resource
- - "aws_instance": the type of the resource
- - "web_server": the name of the resource
-
-
-
-What is the ID of the following resource: `resource "aws_instance" "web_server" {...}`
-
-`aws_instance.web_server`
-
-
-
-True or False? Resource ID must be unique within a workspace
-
-True
-
-
-
-Explain each of the following in regards to resources
-
- * Arguments
- * Attributes
- * Meta-arguments
-
- - Arguments: resource specific configurations
- - Attributes: values exposed by the resource in a form of `resource_type.resource_name.attribute_name`. They are set by the provider or API usually.
- - Meta-arguments: functions of Terraform to change a resource's behaviour
-
-
-#### Terraform - Providers
-
-
-Explain what is a "provider"
-
-[terraform.io](https://www.terraform.io/docs/language/providers/index.html): "Terraform relies on plugins called "providers" to interact with cloud providers, SaaS providers, and other APIs...Each provider adds a set of resource types and/or data sources that Terraform can manage. Every resource type is implemented by a provider; without providers, Terraform can't manage any kind of infrastructure."
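-
-A minimal sketch of declaring and configuring a provider (the version constraint and region are example values):
-
-```
-terraform {
-  required_providers {
-    aws = {
-      source  = "hashicorp/aws"
-      version = "~> 4.0"
-    }
-  }
-}
-
-provider "aws" {
-  region = "us-east-1"
-}
-```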
-
-
-
-What is the name of the provider in this case: `resource "libvirt_domain" "instance" {...}`
-
-libvirt
-
-
-#### Terraform - Variables
-
-
-What are Input Variables in Terraform? Why one should use them?
-
-Input variables serve as parameters to a module in Terraform. They allow you, for example, to define the value of a variable once and use it in different places in the module, so the next time you want to change the value, you change it in one place instead of many.
-
-
-
-How to define variables?
-
-```
-variable "app_id" {
- type = string
- description = "The id of application"
- default = "some_value"
-}
-```
-
-Usually they are defined in their own file (vars.tf for example).
-
-
-
-How variables are used in modules?
-
-They are referenced with `var.VARIABLE_NAME`
-
-vars.tf:
-
-```
-variable "memory" {
- type = string
- default "8192"
-}
-
-variable "cpu" {
- type = string
- default = "4"
-}
-```
-
-main.tf:
-
-```
-resource "libvirt_domain" "vm1" {
- name = "vm1"
- memory = var.memory
- cpu = var.cpu
-}
-```
-
-
-
-How would you enforce users that use your variables to provide values with certain constraints? For example, a number greater than 1
-
-Using `validation` block
-
-```
-variable "some_var" {
- type = number
-
- validation {
- condition = var.some_var > 1
- error_message = "you have to specify a number greater than 1"
- }
-
-}
-```
-
-
-
-What is the effect of setting variable as "sensitive"?
-
-It doesn't show its value when you run `terraform apply` or `terraform plan`, but it is still recorded in the state file.
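-
-For example:
-
-```
-variable "db_password" {
-  type      = string
-  sensitive = true
-}
-```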
-
-
-
-True or Fales? If an expression's result depends on a sensitive variable, it will be treated as sensitive as well
-
-True
-
-
-
-The same variable is defined in the following places:
-
- - The file `terraform.tfvars`
- - Environment variable
- - Using `-var` or `-var-file`
-
-According to variable precedence, which source will be used first?
-
-The order is (from lowest to highest precedence, so the last one wins):
-
- - Environment variable
- - The file `terraform.tfvars`
- - Using `-var` or `-var-file`
-
-
-
-What other way is there to define lots of variables in more "simplified" way?
-
-Using a `.tfvars` file, which contains simple variable name assignments this way:
-
-```
-x = 2
-y = "mario"
-z = "luigi"
-```
-
-
-#### Terraform - State
-
-
-What is the terraform.tfstate file used for?
-
-It keeps track of the IDs of created resources so that Terraform knows what it's managing.
-
-
-
-How do you rename an existing resource?
-
-`terraform state mv`
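-
-A sketch of renaming a resource (the resource names are hypothetical): rename it in the configuration first, then move it in the state so Terraform doesn't destroy and recreate it:
-
-```
-terraform state mv aws_instance.old_name aws_instance.new_name
-```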
-
-
-
-Why does it matter where you store the tfstate file? Where would you store it?
-
- - tfstate contains credentials in plain text. You don't want to put it in a publicly shared location
- - tfstate shouldn't be modified concurrently, so putting it in a shared location available for everyone with "write" permissions might lead to issues (Terraform remote state doesn't have this problem)
- - tfstate is an important file. As such, it might be better to put it in a location that has regular backups
-
-As such, tfstate shouldn't be stored in git repositories. Secured storage, such as secured buckets, is a better option.
-
-
-
-Which command is responsible for creating the state file?
-
- - `terraform apply` (optionally with a saved plan file)
- - The above command will create the tfstate file in the working folder.
-
-
-
-By default where does the state get stored?
-
- - The state is stored by default in a local file named terraform.tfstate.
-
-
-
-Can we store the tfstate file at a remote location? If yes, then under which conditions would you do this?
-
- - Yes, it can also be stored remotely, which works better in a team environment, on the condition that the remote location is not publicly accessible, since the tfstate file contains sensitive information. Access to this remote location must be shared only with team members.
-
-
-
-Mention some best practices related to tfstate
-
- - Don't edit it manually. tfstate was designed to be manipulated by Terraform and not by users directly.
- - Store it in a secured location (since it can include credentials and sensitive data in general)
- - Back it up regularly so you can roll back easily when needed
- - Store it in remote shared storage. This is especially needed when working in a team where the state can be updated by any of the team members
- - Enable versioning if the storage where you keep the state file supports it. Versioning is great for backups and roll-backs in case of an issue.
-
-
-
-How and why should concurrent edits of the state file be avoided?
-
-If two users or processes concurrently edit the state file, it can result in an invalid state file that doesn't actually represent the state of the resources.
-
-To avoid that, Terraform can apply state locking if the backend supports it. For example, AWS S3 supports state locking and consistency via DynamoDB. Often, if the backend supports it, Terraform will make use of state locking automatically, so nothing is required from the user to activate it.
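-
-A minimal sketch of an S3 backend with DynamoDB state locking (bucket, key and table names are placeholders):
-
-```
-terraform {
-  backend "s3" {
-    bucket         = "my-terraform-state"    # hypothetical bucket
-    key            = "prod/terraform.tfstate"
-    region         = "us-east-1"
-    dynamodb_table = "terraform-locks"       # hypothetical lock table
-    encrypt        = true
-  }
-}
-```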
-
-
-
-Describe how you manage state file(s) when you have multiple environments (e.g. development, staging and production)
-
-There is no right or wrong here, but it seems that the overall preferred way is to have a dedicated state file per environment.
-
-
-
-How do you define a variable whose value is set by an external source or changes during terraform apply?
-
-You define it without a value this way: `variable "my_var" {}`
-
-
-
-You've deployed a virtual machine with Terraform and you would like to pass data to it (or execute some commands). Which concept of Terraform would you use?
-
-[Provisioners](https://www.terraform.io/docs/language/resources/provisioners)
-
-
-#### Terraform - Provisioners
-
-
-What are "Provisioners"? What they are used for?
-
-Provisioners are used to execute actions on a local or remote machine. They are extremely useful when you've provisioned an instance and want to make a couple of changes in the machine you've created, without manually SSHing into it after Terraform finishes running and executing the commands by hand.
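-
-For example, a local-exec provisioner that runs a command on the machine running Terraform once the resource is created (the resource values are illustrative):
-
-```
-resource "aws_instance" "web" {
-  ami           = "ami-0123456789abcdef0" # hypothetical AMI ID
-  instance_type = "t3.micro"
-
-  provisioner "local-exec" {
-    command = "echo ${self.private_ip} >> private_ips.txt"
-  }
-}
-```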
-
-
-
-What are `local-exec` and `remote-exec` in the context of provisioners?
-
-
-
-What is a "tainted resource"?
-
-It's a resource which was successfully created but failed during provisioning. Terraform will fail and mark this resource as "tainted".
-
-
-
-What does terraform taint do?
-
-`terraform taint resource.id` manually marks the resource as tainted in the state file. So when you run `terraform apply` the next time, the resource will be destroyed and recreated.
-
-
-
-What types of variables are supported in Terraform?
-
-string
-number
-bool
-list(<TYPE>)
-set(<TYPE>)
-map(<TYPE>)
-object({<ATTR NAME> = <TYPE>, ... })
-tuple([<TYPE>, ...])
-
-
-
-What is a data source? In what scenarios, for example, would you need to use it?
-
-Data sources look up or compute values that can be used elsewhere in a Terraform configuration.
-
-There are quite a few cases where you might need to use them (see the sketch after this list):
-* you want to reference resources not managed through Terraform
-* you want to reference resources managed by a different Terraform module
-* you want to cleanly compute a value with typechecking, such as with aws_iam_policy_document
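-
-A sketch of the first case: looking up an AMI that Terraform doesn't manage and referencing it from a resource (the filter values are illustrative):
-
-```
-data "aws_ami" "ubuntu" {
-  most_recent = true
-  owners      = ["099720109477"] # Canonical's AWS account ID
-
-  filter {
-    name   = "name"
-    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
-  }
-}
-
-resource "aws_instance" "web" {
-  ami           = data.aws_ami.ubuntu.id
-  instance_type = "t3.micro"
-}
-```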
-
-
-
-What are output variables and what does terraform output do?
-
-Output variables are named values that are sourced from the attributes of a module. They are stored in the Terraform state and can be used by other modules through remote_state. The `terraform output` command prints the output values of the root module.
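-
-For example (the resource reference is hypothetical):
-
-```
-output "instance_ip" {
-  description = "Public IP of the web server"
-  value       = aws_instance.web.public_ip
-}
-```
-
-After an apply, `terraform output instance_ip` would print the value.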
-
-
-
-Explain Modules
-
-A Terraform module is a set of Terraform configuration files in a single directory. Modules are small, reusable Terraform configurations that let you manage a group of related resources as if they were a single resource. Even a simple configuration consisting of a single directory with one or more .tf files is a module. When you run Terraform commands directly from such a directory, it is considered the root module. So in this sense, every Terraform configuration is part of a module.
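-
-A sketch of calling a module (the module path and its input variables are hypothetical):
-
-```
-module "web_cluster" {
-  # A local module expected to live in ./modules/web-cluster
-  source = "./modules/web-cluster"
-
-  # Values for input variables the module is assumed to declare
-  instance_count = 3
-  instance_type  = "t3.micro"
-}
-```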
-
-
-
-What is the Terraform Registry?
-
-The Terraform Registry provides a centralized location for official and community-managed providers and modules.
-
-
-
-Explain `remote-exec` and `local-exec`
-
-
-
-
-Explain "Remote State". When would you use it and how?
- Terraform generates a `terraform.tfstate` JSON file that describes the components/services provisioned on the specified provider. Remote
- State stores this file on a remote storage medium to enable collaboration within a team.
-
-
-
-Explain "State Locking"
- State locking is a mechanism that blocks operations against a specific state file from multiple callers, so as to avoid conflicting operations from different team members. Once the first caller's lock is released, another team member may go ahead and
- carry out their own operation. Nevertheless, Terraform will first check the state file to see if the desired resource already exists and,
- if not, it goes ahead and creates it.
-
-
-
-What is the "Random" provider? What is it used for
- The random provider aids in generating numeric or alphabetic characters to use as a prefix or suffix for a desired named identifier.
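-
-For example, a sketch using random_id to suffix a bucket name (the bucket resource is illustrative):
-
-```
-resource "random_id" "suffix" {
-  byte_length = 4
-}
-
-resource "aws_s3_bucket" "data" {
-  bucket = "my-data-${random_id.suffix.hex}" # e.g. my-data-a1b2c3d4
-}
-```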
-
-
-
-How do you test a Terraform module?
- Many answers are acceptable, but the most common one would likely be using the tool `terratest`, and testing that a module can be initialized, can create resources, and can destroy those resources cleanly.
-
-
-
-Aside from `.tfvars` files or CLI arguments, how can you inject dependencies from other modules?
- The built-in Terraform way is to use `remote-state` to look up the outputs from other modules.
- It is also common in the community to use a tool called `terragrunt` to explicitly inject variables between modules.
-
-
-
-What is Terraform import?
-
-Terraform import is used to import existing infrastructure. It allows you to take resources created by some other means (e.g. manually launched cloud resources) and bring them under Terraform management.
-
-
-
-How do you import existing resource using Terraform import?
-
-1. Identify which resource you want to import.
-2. Write Terraform code matching the configuration of that resource.
-3. Run the command `terraform import RESOURCE ID`
-
-e.g. Let's say you want to import an AWS instance. Then you'll perform the following:
-1. Identify that AWS instance in the console
-2. Refer to its configuration and write Terraform code which will look something like:
-```
-resource "aws_instance" "tf_aws_instance" {
- ami = data.aws_ami.ubuntu.id
- instance_type = "t3.micro"
-
- tags = {
- Name = "import-me"
- }
-}
-```
-3. Run the command `terraform import aws_instance.tf_aws_instance i-12345678`
-
-
-## Containers
-
-### Containers Exercises
-
-|Name|Topic|Objective & Instructions|Solution|Comments|
-|--------|--------|------|----|----|
-|Running Containers|Intro|[Exercise](exercises/containers/running_containers.md)|[Solution](exercises/containers/solutions/running_containers.md)
-|Working with Images|Image|[Exercise](exercises/containers/working_with_images.md)|[Solution](exercises/containers/solutions/working_with_images.md)
-|My First Dockerfile|Dockerfile|[Exercise](exercises/containers/write_dockerfile_run_container.md)|
-|Run, Forest, Run!|Restart Policies|[Exercise](exercises/containers/run_forest_run.md)|[Solution](exercises/containers/solutions/run_forest_run.md)
-|Layer by Layer|Image Layers|[Exercise](exercises/containers/image_layers.md)|[Solution](exercises/containers/solutions/image_layers.md)
-|Containerize an application | Containerization |[Exercise](exercises/containers/containerize_app.md)|[Solution](exercises/containers/solutions/containerize_app.md)
-|Multi-Stage Builds|Multi-Stage Builds|[Exercise](exercises/containers/multi_stage_builds.md)|[Solution](exercises/containers/solutions/multi_stage_builds.md)
-
-### Containers Self Assessment
-
-
-What is a Container?
-
-This can be tricky to answer since there are many ways to create containers:
-
- - Docker
- - systemd-nspawn
- - LXC
-
-If to focus on OCI (Open Container Initiative) based containers, it offers the following [definition](https://github.com/opencontainers/runtime-spec/blob/master/glossary.md#container): "An environment for executing processes with configurable isolation and resource limitations. For example, namespaces, resource limits, and mounts are all part of the container environment."
-
-
-
-Why are containers needed? What is their goal?
-
-OCI provides a good [explanation](https://github.com/opencontainers/runtime-spec/blob/master/principles.md#the-5-principles-of-standard-containers): "Define a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in a format that is self-describing and portable, so that any compliant runtime can run it without extra dependencies, regardless of the underlying machine and the contents of the container."
-
-
-
-How are containers different from virtual machines (VMs)?
-
-The primary difference between containers and VMs is that containers allow you to virtualize
-multiple workloads on a single operating system while in the case of VMs, the hardware is being virtualized to run multiple machines each with its own guest OS.
-You can also think about it as containers are for OS-level virtualization while VMs are for hardware virtualization.
-
-* Containers don't require an entire guest operating system as VMs do. Containers share the system's kernel, as opposed to VMs. They isolate themselves via the use of kernel features such as namespaces and cgroups
-* It usually takes a few seconds to set up a container, as opposed to VMs which can take minutes or at least more time than containers, as there is an entire OS to boot and initialize, while containers share the underlying OS
-* Virtual machines are considered to be more secure than containers
-* VM portability is considered to be limited when compared to containers
-
-
-
-Do we need virtual machines in the age of containers? Are they still relevant?
-
-
-
-In which scenarios would you use containers and in which you would prefer to use VMs?
-
-You should choose VMs when:
- * You need to run an application which requires all the resources and functionalities of an OS
- * You need full isolation and security
-
-You should choose containers when:
- * You need a lightweight solution
- * Running multiple versions or instances of a single application
-
-
-
-Describe the process of containerizing an application
-
-1. Write a Dockerfile that includes your app (including the commands to run it) and its dependencies (a sample Dockerfile is sketched below)
-2. Build the image using the Dockerfile you wrote
-3. You might want to push the image to a registry
-4. Run the container using the image you've built
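-
-A minimal sketch of such a Dockerfile for a hypothetical Python app:
-
-```
-FROM python:3.9-slim
-WORKDIR /app
-# Install dependencies first so this layer is cached between builds
-COPY requirements.txt .
-RUN pip install -r requirements.txt
-COPY . .
-CMD ["python", "app.py"]
-```
-
-You would then build and run it with something like `docker image build -t my-app .` followed by `docker container run my-app`.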
-
-
-#### Containers - OCI
-
-
-What is the OCI?
-
-OCI (Open Container Initiative) is an open governance structure established in 2015 to standardize container creation - mostly the image format and runtime. At that time there were a number of parties involved, the most prominent being Docker.
-
-Specifications published by OCI:
-
- - [image-spec](https://github.com/opencontainers/image-spec)
- - [runtime-spec](https://github.com/opencontainers/runtime-spec)
-
-
-
-Which operations OCI based containers must support?
-
-Create, Kill, Delete, Start and Query State.
-
-
-#### Containers - Basic Commands
-
-
-How to list all the containers on a given host?
-
-In the case of Docker, use: `docker container ls`
-In the case of Podman, it's not very different: `podman container ls`
-
-
-
-How to run a container?
-
-Docker: `docker container run ubuntu`
-Podman: `podman container run ubuntu`
-
-
-
-Why after running podman container run ubuntu is the output of podman container ls empty?
-
-Because the container exits immediately after running the ubuntu image. This is completely normal and expected, as containers are designed to run a service or an app and exit when they are done running it.
-
-If you want the container to keep running, you can run a command like `sleep 100`, which will run for 100 seconds, or you can attach to the terminal of the container with a command similar to: `podman container run -it ubuntu /bin/bash`
-
-
-
-How to attach your shell to a terminal of a running container?
-
-`podman container exec -it [container id/name] bash`
-
-This can be done in advance while running the container: `podman container run -it [image:tag] /bin/bash`
-
-
-
-True or False? You can remove a running container if it isn't running anything
-
-False. You have to stop the container before removing it.
-
-
-
-How to stop and remove a container?
-
-`podman container stop [container id/name] && podman container rm [container id/name]`
-
-
-
-What happens when you run `docker container run ubuntu`?
-
-1. Docker client posts the command to the API server running as part of the Docker daemon
-2. Docker daemon checks if a local image exists
- 1. If it exists, it will use it
- 2. If it doesn't exist, it will go to the remote registry (Docker Hub by default) and pull the image locally
-3. containerd and runc are instructed (by the daemon) to create and start the container
-
-
-
-How to run a container in the background?
-
-With the `-d` flag. The container will run in the background and will not be attached to the terminal.
-
-`docker container run -d httpd` or `podman container run -d httpd`
-
-
-#### Containers - Images
-
-
-What is a container image?
-
-* An image of a container contains the application, its dependencies and the operating system where the application is executed.
-* It's a collection of read-only layers. These layers are loosely coupled
- * Each layer is assembled out of one or more files
-
-
-
-Why are container images relatively small?
-
-* Most images don't contain a kernel. They share and access the one used by the host on which they are running
-* Containers are intended to run a specific application in most cases. This means they hold only what the application needs in order to run
-
-
-
-How to list the container images on a certain host?
-
-`podman image ls`
-`docker image ls`
-
-Depends on which container engine you use.
-
-
-
-What is the centralized location where images are stored called?
-
-Registry
-
-
-
-A registry contains one or more ____ which in turn contain one or more ____
-
-A registry contains one or more repositories which in turn contain one or more images.
-
-
-
-How to find out which registry you use by default in your environment?
-
-Depends on the container technology you are using. For example, in the case of Docker, it can be done with `docker info`
-
-```
-> docker info
-Registry: https://index.docker.io/v1
-```
-
-
-
-How to retrieve the latest ubuntu image?
-
-`docker image pull ubuntu:latest`
-
-
-
-True or False? It's not possible to remove an image if a certain container is using it
-
-True. You should stop and remove the container before trying to remove the image it uses.
-
-
-
-True or False? If a tag isn't specified when pulling an image, the 'latest' tag is being used
-
-True
-
-
-
-True or False? Using the 'latest' tag when pulling an image means, you are pulling the most recently published image
-
-False. While this might be true in some cases, it's not guaranteed that you'll pull the latest published image when using the 'latest' tag.
-For example, in some images the 'edge' tag is used for the most recently published images.
-
-
-
-Where are pulled images stored?
-
-Depends on the container technology being used. For example, in the case of Docker, images are stored in `/var/lib/docker/`
-
-
-
-Explain container image layers
-
- - The layers of an image are where all the content is stored - code, files, etc.
- - Each layer is independent
- - Each layer has an ID that is a hash based on its content
- - The layers (like the image itself) are immutable, which means a change to one of the layers can be easily identified
-
-
-
-True or False? Changing the content of any of the image layers will cause the hash content of the image to change
-
-True. These hashes are content based and since images (and their layers) are immutable, any change will cause the hashes to change.
-
-
-
-How to list the layers of an image?
-
-In the case of Docker, you can use `docker image inspect [image]`
-
-
-
-True or False? In most cases, container images contain their own kernel
-
-False. They share and access the one used by the host on which they are running.
-
-
-
-True or False? A single container image can have multiple tags
-
-True. When listing images, you might be able to see two images with the same ID but different tags.
-
-
-
-What is a dangling image?
-
-It's an image without tags attached to it.
-One way to reach this situation is by building an image with the exact same name and tag as another, already existing image. The dangling image can still be referenced by using its full SHA.
-
-
-
-How to see changes done to a given image over time?
-
-In the case of Docker, you could use `docker history [image]`
-
-
-
-True or False? Multiple images can share layers
-
-True.
-One piece of evidence for that can be found in pulling images. Sometimes when you pull an image, you'll see a line similar to the following:
-`fa20momervif17: already exists`
-
-This is because it recognizes that such a layer already exists on the host, so there is no need to pull the same layer twice.
-
-
-
-What is the digest of an image? What problem does it solve?
-
-Tags are mutable. This means we can have two different images with the same name and the same tag. It can be very confusing to see two images with the same name and the same tag in your environment. How would you know if they are truly the same or different?
-
-This is where "digests" come in handy. A digest is a content-addressable identifier. Unlike tags, it isn't mutable. Its value is predictable, and this is how you can tell whether two images are the same content-wise, rather than merely looking at their name and tag.
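-
-For example, you can list local images together with their digests:
-
-```
-docker image ls --digests
-```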
-
-
-
-True or False? A single image can support multiple architectures (Linux x64, Windows x64, ...)
-
-True.
-
-
-
-What is a distribution hash in regards to layers?
-
- - Layers are compressed when pushed or pulled
- - The distribution hash is the hash of the compressed layer
- - The distribution hash is used when pulling or pushing images for verification (making sure no one tampered with the image or layers)
- - It's also used for avoiding ID collisions (a case where two images have exactly the same generated ID)
-
-
-
-How do multi-architecture images work? Explain by describing what happens when an image is pulled
-
-1. A client makes a call to the registry to use a specific image (using an image name and optionally a tag)
-2. A manifest list is parsed (assuming it exists) to check if the architecture of the client is supported and available as a manifest
-3. If it is supported (a manifest for the architecture is available) the relevant manifest is parsed to obtain the IDs of the layers
-4. Each layer is then pulled using the obtained IDs from the previous step
-
-
-
-How to check which architectures a certain container image supports?
-
-`docker manifest inspect [image]`
-
-
-
-How to check what a certain container image will execute once we run a container based on it?
-
-Look for "Cmd" or "Entrypoint" fields in the output of `docker image inspec `
-
-
-
-How to view the instructions that were used to build an image?
-
-`docker image history [image]:[tag]`
-
-
-
-How does `docker image build` work?
-
-1. Docker spins up a temporary container
-2. Runs a single instruction in the temporary container
-3. Stores the result as a new image layer
-4. Removes the temporary container
-5. Repeats the process for every instruction
-
-
-
-What is the role of cache in image builds?
-
-When you build an image for the first time, the different layers are cached. So, while the first build of the image might take time, any subsequent build of the same image (given that neither the Dockerfile nor the content used by the instructions changed) will be almost instant, thanks to the caching mechanism.
-
-In a bit more detail, it works this way:
-1. The first instruction (FROM) will check if the base image already exists on the host before pulling it
-2. For the next instruction, it will check in the build cache if an existing layer was built from the same base image with the same instruction
- 1. If it finds such a layer, it skips the instruction, links the existing layer, and keeps using the cache
- 2. If it doesn't find a matching layer, it builds the layer and the cache is invalidated
-
-Note: in some cases (like COPY and ADD instructions) the instruction might stay the same, but if the content of what is being copied changes, then the cache is invalidated. This check is done by comparing the checksum of each file that is being copied.
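-
-A common way to take advantage of the cache is ordering instructions from least to most frequently changing. A sketch, assuming a hypothetical Node.js app:
-
-```
-FROM node:20-alpine
-WORKDIR /app
-# Dependency manifests change rarely, so these layers are usually served from cache
-COPY package.json package-lock.json ./
-RUN npm ci
-# Source code changes often; only the layers from here on get rebuilt
-COPY . .
-CMD ["node", "server.js"]
-```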
-
-
-
-What ways are there to reduce container image size?
-
- * Reduce the number of instructions - in some cases you may be able to join layers, for example by installing multiple packages with one instruction or by using `&&` to concatenate RUN instructions (see the sketch below)
- * Use smaller images - in some cases you might be using images that contain more than what is needed for your application to run. It's worth checking whether a smaller image can replace the one you usually use
- * Clean up after running commands - some commands, like package installation, create metadata or cache that you might not need for running the application. It's important to clean up after such commands to reduce the image size
- * For Docker images, you can use multi-stage builds
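-
-A sketch of joining instructions and cleaning up in the same layer, assuming an apt-based base image:
-
-```
-RUN apt-get update && \
-    apt-get install --no-install-recommends -y curl ca-certificates && \
-    rm -rf /var/lib/apt/lists/*
-```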
-
-
-
-What are the pros and cons of squashing images?
-
-Pros:
- * Smaller image
- * Reduced number of layers (especially if the image has a lot of layers)
-Cons:
- * No sharing of the image layers
- * Push and pull can take more time (because no matching layers are found on the target)
-
-
-#### Containers - Volume
-
-
-How to create a new volume?
-
-`docker volume create some_volume`
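-
-To actually use the volume, mount it into a container (the mount path here is illustrative): `docker container run -d -v some_volume:/data httpd`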
-
-
-#### Containers - Dockerfile
-
-
-What is a Dockerfile?
-
-Different container engines (e.g. Docker, Podman) can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text file that contains all the instructions for building an image which containers can use.
-
-
-
-What is the instruction found in all Dockerfiles and what does it mean?
-
-The first instruction is `FROM [base image]`
-It specifies the base image to be used. Every other instruction is a layer on top of that base image.
-
-
-
-List five different instructions that are available for use in a Dockerfile
-
- * WORKDIR: sets the working directory inside the image filesystem for all the instructions following it
- * EXPOSE: exposes the specified port (it doesn't add a new layer; rather, it's documented as image metadata)
- * ENTRYPOINT: specifies the startup command to run when a container is started from the image
- * ENV: sets an environment variable to the given value
- * USER: sets the user (and optionally the user group) to use while running the image (a combined sketch follows below)
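-
-A sketch of a Dockerfile combining these instructions (image, port and file names are illustrative):
-
-```
-FROM node:20-alpine
-ENV NODE_ENV=production
-WORKDIR /app
-COPY . .
-EXPOSE 3000
-USER node
-ENTRYPOINT ["node", "server.js"]
-```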
-
-
-
-What are some of the best practices regarding container images and Dockerfiles that you are following?
-
- * Include only the packages you are going to use. Nothing else.
- * Specify a tag in the FROM instruction. Not using a tag means you'll always pull the latest image, which changes over time and might produce unexpected results.
- * Do not use environment variables to share secrets
- * Use images from official repositories
- * Keep images small! - you want them to include only what is required for the application to run successfully. Nothing else.
- * If you are using the apt package manager, you can use 'no-install-recommends' with `apt-get install` to install only the main dependencies (instead of suggested and recommended packages)
-
-
-
-What is the "build context"?
-
-[Docker docs](https://docs.docker.com/engine/reference/commandline/build): "A build’s context is the set of files located in the specified PATH or URL"
-
-
-
-What is the difference between ADD and COPY in Dockerfile?
-
-COPY takes in a source and destination. It lets you copy in a file or directory from the build context into the Docker image itself.
-ADD lets you do the same, but it also supports two other sources. You can use a URL instead of a file or directory from the build context. In addition, you can extract a tar file from the source directly into the destination.
-
-Although ADD and COPY are functionally similar, generally speaking, COPY is preferred. That's because it's more transparent than ADD. COPY only supports the basic copying of files from the build context into the container, while ADD has some features (like local-only tar extraction and remote URL support) that are not immediately obvious.
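-
-A short sketch of the difference (paths and URL are illustrative):
-
-```
-COPY app/ /opt/app/                      # copies files/directories from the build context only
-ADD https://example.com/file.txt /opt/   # ADD can also fetch from a URL
-ADD archive.tar.gz /opt/                 # ...and extracts a local tar archive into the destination
-```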
-
-
-
-What is the difference between CMD and RUN in Dockerfile?
-
-RUN lets you execute commands inside of your Docker image. These commands get executed once at build time and get written into your Docker image as a new layer.
-CMD is the command the container executes by default when you launch the built image. A Dockerfile can only have one CMD.
-You could say that CMD is a Docker run-time operation, meaning it’s not something that gets executed at build time. It happens when you run an image. A running image is called a container.
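-
-A minimal sketch illustrating the difference:
-
-```
-FROM ubuntu:22.04
-RUN apt-get update && apt-get install -y python3   # executed once, at build time (adds a layer)
-CMD ["python3", "--version"]                       # default command, executed at container run time
-```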
-
-
-
-How to create a new image using a Dockerfile?
-
-The following command is executed from within the directory where the Dockerfile resides:
-
-`docker image build -t some_app:latest .`
-`podman image build -t some_app:latest .`
-
-
-
-Do you perform any checks or testing on your Dockerfiles?
-
-One option is to use [hadolint](https://github.com/hadolint/hadolint) project which is a linter based on Dockerfile best practices.
-
-
-
-Which instructions in Dockerfile create new layers?
-
-Instructions such as FROM, COPY and RUN create new image layers instead of just adding metadata.
-
-
-
-Which instructions in Dockerfile create image metadata and don't create new layers?
-
-Instructions such as ENTRYPOINT, ENV and EXPOSE create image metadata and don't create new layers.
-
-
-
-Is it possible to identify which instructions create a new layer from the output of `docker image history`?
-
-
-#### Containers - Architecture
-
-
-How do containers achieve isolation from the rest of the system?
-
-Through the use of namespaces and cgroups. The Linux kernel has several types of namespaces:
-
- - Process ID namespaces: include an independent set of process IDs
- - Mount namespaces: isolation and control of mountpoints
- - Network namespaces: isolate system networking resources such as the routing table, interfaces, ARP table, etc.
- - UTS namespaces: isolate the hostname and domain name
- - IPC namespaces: isolate interprocess communication
- - User namespaces: isolate user and group IDs
- - Time namespaces: isolate the view of system time
-
-
-
-Describe in detail what happens when you run `podman/docker run hello-world`?
-
-1. The Docker/Podman CLI passes your request to the daemon
-2. The daemon downloads the image from the registry (Docker Hub by default)
-3. The daemon creates a new container using the image it downloaded
-4. The daemon redirects output from the container to the CLI, which redirects it to the standard output
-
-
-
-Describe the difference between cgroups and namespaces
-
-cgroup: Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour.
-namespace: wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource.
-
-In short:
-
-Cgroups = limit how much you can use;
-namespaces = limit what you can see (and therefore use)
-
-Cgroups involve resource metering and limiting: memory, CPU, block I/O and network.
-
-Namespaces provide processes with their own view of the system.
-
-Multiple namespaces exist: pid, net, mnt, uts, ipc, user
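-
-A quick way to peek at both mechanisms on a Linux host (using util-linux and procfs):
-
-```
-lsns                     # list the namespaces visible to the current shell
-cat /proc/self/cgroup    # show which cgroups the current process belongs to
-```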
-
-
-
-#### Containers - Docker Architecture
-
-
-Which components/layers compose the Docker technology?
-
-1. Runtime - responsible for starting and stopping containers
-2. Daemon - implements the Docker API and takes care of managing images (including builds), authentication, security, networking, etc.
-3. Orchestrator
-
-
-
-What components are part of the Docker engine?
-
- - Docker daemon
- - containerd
- - runc
-
-
-
-What is the low-level runtime?
-
- - The low-level runtime is called runc
- - It manages every container running on the Docker host
- - Its purpose is to interact with the underlying OS to start and stop containers
- - It's the reference implementation of the OCI (Open Containers Initiative) container-runtime-spec
- - It's a small CLI wrapper for libcontainer
-
-
-
-What is the high-level runtime?
-
- - The high-level runtime is called containerd
- - It was developed by Docker Inc. and at some point donated to CNCF
- - It manages the whole lifecycle of a container - start, stop, remove and pause
- - It takes care of setting up network interfaces, volumes, pushing and pulling images, ...
- - It manages the lower-level runtime (runc) instances
- - It's used by both Docker and Kubernetes as a container runtime
- - It sits between the Docker daemon and runc at the OCI layer
-
-Note: if you run `ps -ef | grep -i containerd` on a system with Docker installed and running, you should see a containerd process
-
-
-
-True or False? The docker daemon (dockerd) performs lower-level tasks compared to containerd
-
-False. The Docker daemon performs higher-level tasks compared to containerd.
-It's responsible for managing networks, volumes, images, ...
-
-
-
-Describe in detail what happens when you run `docker pull image:tag`?
-
-The Docker CLI passes your request to the Docker daemon. The dockerd logs show the process:
-
-```
-docker.io/library/busybox:latest resolved to a manifestList object with 9 entries; looking for a unknown/amd64 match
-found match for linux/amd64 with media type application/vnd.docker.distribution.manifest.v2+json, digest sha256:400ee2ed939df769d4681023810d2e4fb9479b8401d97003c710d0e20f7c49c6
-pulling blob sha256:61c5ed1cbdf8e801f3b73d906c61261ad916b2532d6756e7c4fbcacb975299fb Downloaded 61c5ed1cbdf8 to tempfile /var/lib/docker/tmp/GetImageBlob909736690
-Applying tar in /var/lib/docker/overlay2/507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7/diff storage-driver=overlay2
-Applied tar sha256:514c3a3e64d4ebf15f482c9e8909d130bcd53bcc452f0225b0a04744de7b8c43 to 507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7, size: 1223534
-```
-
-
-
-Describe in detail what happens when you run a container
-
-1. The Docker client converts the run command into an API payload
-2. It then POSTs the payload to the API endpoint exposed by the Docker daemon
-3. When the daemon receives the command to create a new container, it makes a call to containerd via gRPC
-4. containerd converts the required image into an OCI bundle and tells runc to use that bundle for creating the container
-5. runc interfaces with the OS kernel to pull together the different constructs (namespaces, cgroups, etc.) used for creating the container
-6. The container process is started as a child process of runc
-7. Once it starts, runc exits
-
-
-
-True or False? Killing the Docker daemon will kill all the running containers
-
-False. While this was true at some point, today the container runtime isn't part of the daemon (it's part of containerd and runc) so stopping or killing the daemon will not affect running containers.
-
-
-
-True or False? containerd forks a new instance of runc for every container it creates
-
-True
-
-
-
-True or False? Running a dozen containers will result in having a dozen runc processes
-
-False. Once a container is created, the parent runc process exits.
-
-
-
-What is shim in regards to Docker?
-
-shim is the process that becomes the container's parent when the runc process exits. It's responsible for:
-
- - Reporting exit code back to the Docker daemon
- - Making sure the container doesn't terminate if the daemon is being restarted. It does so by keeping the stdout and stdin open
-
-
-
-What does `podman commit` do? When would you use it?
-
-It creates a new image from a container's changes.
-
-
-
-How would you transfer data from one container into another?
-
-
-
-What happens to the data of a container when the container exits?
-
-
-
-Explain what each of the following commands do:
-
- * docker run
- * docker rm
- * docker ps
- * docker pull
- * docker build
- * docker commit
-
-
-
-
-How do you remove old, non-running containers?
-
-1. To remove one or more containers, use the `docker container rm` command followed by the IDs of the containers you want to remove
-2. The `docker system prune` command will remove all stopped containers, all dangling images, and all unused networks
-3. `docker rm $(docker ps -a -q)` - this command will delete all stopped containers. The command `docker ps -a -q` returns all existing container IDs and passes them to the `rm` command, which deletes them. Any running containers will not be deleted.
-
-
-
-How does the Docker client communicate with the daemon?
-
-Via the local socket at `/var/run/docker.sock`
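-
-For example, assuming a curl build with unix-socket support, you can talk to the daemon's API directly over that socket:
-
-```
-curl --unix-socket /var/run/docker.sock http://localhost/containers/json
-```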
-
-
-
-Explain Docker interlock
-
-
-
-What is Docker Repository?
-
-
-
-Explain image layers
-
-A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the very last one is read-only.
-Each layer is only a set of differences from the layer before it. The layers are stacked on top of each other. When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.
-The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
-Because each container has its own writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state.
-
-
-
-What best practices are you familiar with related to working with containers?
-
-
-
-How do you manage persistent storage in Docker?
-
-
-
-How can you connect from the inside of your container to the localhost of your host, where the container runs?
-
-
-
-How do you copy files from Docker container to the host and vice versa?
-
-
-#### Containers - Docker Compose
-
-
-Explain what is Docker compose and what is it used for
-
-Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
-
-For example, you can use it to set up the ELK stack where the services are: elasticsearch, logstash and kibana. Each runs in its own container.
-In general, it's useful for running applications composed of several different services. It lets you manage them as one deployed app, instead of multiple separate services.
-
-
-
-Describe the process of using Docker Compose
-
-* Define the services you would like to run together in a docker-compose.yml file
-* Run `docker-compose up` to run the services
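-
-A minimal sketch of a `docker-compose.yml` (service names, images and ports are illustrative):
-
-```
-version: "3.8"
-services:
-  web:
-    image: httpd
-    ports:
-      - "8080:80"
-  db:
-    image: mongo
-    volumes:
-      - db_data:/data/db
-volumes:
-  db_data:
-```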
-
-
-#### Containers - Docker Images
-
-
-What is Docker Hub?
-
-One of the most common registries for retrieving images.
-
-
-
-How to push an image to Docker Hub?
-
-`docker image push [username]/[image name]:[tag]`
-
-For example:
-
-`docker image push mario/web_app:latest`
-
-
-
-What is the difference between Docker Hub and Docker cloud?
-
-Docker Hub is a native Docker registry service which allows you to run pull
-and push commands to install and deploy Docker images from the Docker Hub.
-
-Docker Cloud is built on top of the Docker Hub so Docker Cloud provides
-you with more options/features compared to Docker Hub. One example is
-Swarm management which means you can create new swarms in Docker Cloud.
-
-
-
-Explain Multi-stage builds
-
-Multi-stage builds allow you to produce smaller container images by splitting the build process into multiple stages.
-
-As an example, imagine you have one Dockerfile where you first build the application and then run it. The build process might use packages and libraries you don't really need for running the application later. Moreover, the build process might produce different artifacts, not all of which are needed for running the application.
-
-How do you deal with that? Sure, one option is to add more instructions to remove all the unnecessary stuff, but there are a couple of issues with this approach:
-1. You need to know exactly what to remove, and that might not be as straightforward as you think
-2. You add new layers which are not really needed
-
-A better solution is to use multi-stage builds, where one stage (the build process) passes the relevant artifacts/outputs to the stage that runs the application.
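-
-A minimal sketch of a multi-stage Dockerfile, assuming a hypothetical Go application:
-
-```
-# Build stage: has the full toolchain
-FROM golang:1.21 AS build
-WORKDIR /src
-COPY . .
-RUN go build -o /bin/app .
-
-# Final stage: only the compiled binary is copied over
-FROM alpine:3.19
-COPY --from=build /bin/app /bin/app
-ENTRYPOINT ["/bin/app"]
-```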
-
-
-
-True or False? In multi-stage builds, artifacts can be copied between stages
-
-True. This allows us to eventually produce smaller images.
-
-
-
-What is `.dockerignore` used for?
-
-By default, Docker uses everything (all the files and directories) in the directory you use as the build context.
-`.dockerignore` is used for excluding files and directories from the build context
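-
-A typical sketch of a `.dockerignore` file (entries are illustrative):
-
-```
-.git
-node_modules
-*.log
-```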
-
-
-#### Containers - Networking
-
-
-What container network standards or architectures are you familiar with?
-
-CNM (Container Network Model):
- * Requires a distributed key-value store (like etcd, for example) for storing the network configuration
- * Used by Docker
-CNI (Container Network Interface):
- * Network configuration should be in JSON format
-
-
-#### Containers - Docker Networking
-
-
-Which network specification does Docker use and what is its implementation called?
-
-Docker is using the CNM (Container Network Model) design specification.
-The implementation of CNM specification by Docker is called "libnetwork". It's written in Go.
-
-
-
-Explain the following blocks in regards to CNM:
-
- * Networks
- * Endpoints
- * Sandboxes
-
- * Networks: a software implementation of a switch. They are used for grouping and isolating a collection of endpoints.
- * Endpoints: virtual network interfaces. Used for making connections.
- * Sandboxes: an isolated network stack (interfaces, routing tables, ports, ...)
-
-
-
-True or False? If you would like to connect a container to multiple networks, you need multiple endpoints
-
-True. An endpoint can connect only to a single network.
-
-
-
-What are some features of libnetwork?
-
-* Native service discovery
-* ingress-based load balancer
-* network control plane and management plane
-
-
-#### Containers - Security
-
-
-What security best practices are there regarding containers?
-
- * Install only the necessary packages in the container
- * Don't run containers as root when possible
- * Don't mount the Docker daemon unix socket into any of the containers
- * Set volumes and the container's filesystem to read-only
- * DO NOT run containers with the `--privileged` flag
-
-
-
-A container can cause a kernel panic and bring down the whole host. What preventive actions can you apply to avoid this specific situation?
-
- * Install only the necessary packages in the container
- * Set volumes and the container's filesystem to read-only
- * DO NOT run containers with the `--privileged` flag
-
-
-#### Containers - Docker in Production
-
-
-What are some best practices you follow in regards to using containers in production?
-
-Images:
- * Use images from official repositories
- * Include only the packages you are going to use. Nothing else.
- * Specify a tag in the FROM instruction. Not using a tag means you'll always pull the latest image, which changes over time and might produce unexpected results.
- * Do not use environment variables to share secrets
- * Keep images small! - you want them to include only what is required for the application to run successfully. Nothing else.
-Components:
- * Secured connection between components (e.g. client and server)
-
-
-
-True or False? It's recommended for production environments that the Docker client and server communicate over the network using an HTTP socket
-
-False. Communication between the client and server shouldn't be done over HTTP since it's insecure. It's better to enforce the daemon to only accept network connections that are secured with TLS.
-Basically, the Docker daemon will only accept secured connections with certificates from a trusted CA.
-
-
-
-What forms of self-healing are available for Docker containers?
-
-Restart policies. They allow you to automatically restart containers after certain events.
-
-
-
-What restart policies are you familiar with?
-
- * always: restart the container when it stops (unless it was stopped explicitly, e.g. with `docker container stop`)
- * unless-stopped: restart the container unless it was in a stopped state
- * no: don't restart the container at any point (the default policy)
- * on-failure: restart the container when it exits due to an error (= an exit code different than zero); see the sketch below
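-
-A sketch of applying a restart policy when starting containers (image names are illustrative):
-
-```
-docker container run -d --restart unless-stopped httpd
-docker container run -d --restart on-failure:5 some_app   # retry at most 5 times
-```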
-
-
-#### Containers - Docker Misc
-
-Explain what is Docker Bench
-
-
-## Kubernetes
-
-
-
-### Kubernetes Exercises
-
-#### Developer & "Regular" User Path
-
-|Name|Topic|Objective & Instructions|Solution|Comments|
-|--------|--------|------|----|----|
-| My First Pod | Pods | [Exercise](exercises/kubernetes/pods_01.md) | [Solution](exercises/kubernetes/solutions/pods_01_solution.md)
-| "Killing" Containers | Pods | [Exercise](exercises/kubernetes/killing_containers.md) | [Solution](exercises/kubernetes/solutions/killing_containers.md)
-| Creating a Service | Service | [Exercise](exercises/kubernetes/services_01.md) | [Solution](exercises/kubernetes/solutions/services_01_solution.md)
-| Creating a ReplicaSet | ReplicaSet | [Exercise](exercises/kubernetes/replicaset_01.md) | [Solution](exercises/kubernetes/solutions/replicaset_01_solution.md)
-| Operating ReplicaSets | ReplicaSet | [Exercise](exercises/kubernetes/replicaset_02.md) | [Solution](exercises/kubernetes/solutions/replicaset_02_solution.md)
-| ReplicaSets Selectors | ReplicaSet | [Exercise](exercises/kubernetes/replicaset_03.md) | [Solution](exercises/kubernetes/solutions/replicaset_03_solution.md)
-
-### Kubernetes Self Assessment
-
-
-What is Kubernetes? Why do organizations use it?
-
-Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
-
-To understand what Kubernetes is good for, let's look at some examples:
-
-* You would like to run a certain application in a container in multiple different locations. Sure, if it's 2-3 servers/locations, you can do it by yourself, but it can be challenging to scale that up to many additional locations
-* Performing updates and changes across hundreds of containers
-* Handling cases where the current load requires scaling up (or down)
-
-
-
-When or why NOT to use Kubernetes?
-
- - If you are a big team of engineers (e.g. 200) deploying applications using containers and you need to manage scaling, rolling out updates, etc., you probably want to use Kubernetes
-
- - If you manage low-level infrastructure or bare metal servers, Kubernetes is probably not what you need or want
- - If you are a small team (e.g. 20-50 engineers), Kubernetes might be overkill (even if you need scaling, rolling out updates, etc.)
-
-
-
-What Kubernetes objects are there?
-
- * Pod
- * Service
- * ReplicationController
- * ReplicaSet
- * DaemonSet
- * Namespace
- * ConfigMap
- ...
-
-
-
-What fields are mandatory with any Kubernetes object?
-
-metadata, kind and apiVersion
-
-
-
-What actions or operations do you consider best practices when it comes to Kubernetes?
-
- - Always make sure Kubernetes YAML files are valid. Applying automated checks and pipelines is recommended.
- - Always specify requests and limits to prevent a situation where containers use the entire cluster's memory, which may lead to OOM issues
-
-
-
-What is kubectl?
-
-Kubectl is the Kubernetes command line tool that allows you to run commands against Kubernetes clusters. For example, you can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
-
-
-
-What Kubernetes objects do you usually use when deploying applications in Kubernetes?
-
-* Deployment - creates the Pods and watches over them
-* Service: route traffic to Pods internally
-* Ingress: route traffic from outside the cluster
-
-
-#### Kubernetes - Cluster
-
-
-What is a Kubernetes Cluster?
-
-Red Hat Definition: "A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster.
-At a minimum, a cluster contains a worker node and a master node."
-
-Read more [here](https://www.redhat.com/en/topics/containers/what-is-a-kubernetes-cluster)
-
-
-
-What is a Node?
-
-A node is a virtual or a physical machine that serves as a worker for running the applications.
-It's recommended to have at least 3 nodes in a production environment.
-
-
-
-What is the master node responsible for?
-
-The master coordinates all the workflows in the cluster:
-
-* Scheduling applications
-* Managing desired state
-* Rolling out new updates
-
-
-
-Which command will list the nodes of the cluster?
-
-`kubectl get nodes`
-
-
-
-True or False? Every cluster must have 0 or more master nodes and at least 1 worker
-
-False. A Kubernetes cluster consists of at least 1 master and can have 0 workers (although that wouldn't be very useful...)
-
-
-
-What are the components of the master node?
-
- * API Server - the Kubernetes API. All cluster components communicate through it
- * Scheduler - assigns an application with a worker node it can run on
- * Controller Manager - cluster maintenance (replications, node failures, etc.)
- * etcd - stores cluster configuration
-
-
-
-What are the components of a worker node?
-
- * Kubelet - an agent responsible for node communication with the master.
- * Kube-proxy - load balancing traffic between app components
- * Container runtime - the engine that runs the containers (Podman, Docker, ...)
-
-
-
-Place the components on the right side of the image in the right place in the drawing
-
-
-
-
-
-
-You are managing multiple Kubernetes clusters. How do you quickly change between the clusters using kubectl?
-
-`kubectl config use-context [context name]`
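-
-To list the available contexts (and see which one is current), you can run `kubectl config get-contexts`.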
-
-
-
-How do you prevent high memory usage in your Kubernetes cluster and possibly issues like memory leak and OOM?
-
-Apply requests and limits, especially on third-party applications (where the uncertainty is even bigger)
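-
-A sketch of the relevant fragment of a container spec (the values are illustrative):
-
-```
-resources:
-  requests:
-    memory: "128Mi"
-    cpu: "250m"
-  limits:
-    memory: "256Mi"
-    cpu: "500m"
-```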
-
-
-
-Do you have experience with deploying a Kubernetes cluster? If so, can you describe the process in high-level?
-
-1. Create multiple instances you will use as Kubernetes nodes/workers. Also create an instance to act as the Master. The instances can be provisioned in a cloud or they can be virtual machines on bare metal hosts.
-2. Provision a certificate authority that will be used to generate TLS certificates for the different components of a Kubernetes cluster (kubelet, etcd, ...)
- 1. Generate a certificate and private key for the different components
-3. Generate kubeconfigs so the different clients of Kubernetes can locate the API servers and authenticate.
-4. Generate an encryption key that will be used for encrypting the cluster data
-5. Create an etcd cluster
-
-
-#### Kubernetes - Pods
-
-
-Explain what is a Pod
-
-A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.
-Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
-
-
-
-Deploy a pod called "my-pod" using the nginx:alpine image
-
-`kubectl run my-pod --image=nginx:alpine --restart=Never`
-
-If you are a Kubernetes beginner you should know that this is not a common way to run Pods. The common way is to run a Deployment which in turn runs Pod(s).
-In addition, Pods and/or Deployments are usually defined in files rather than executed directly using only the CLI arguments.
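-
-For example, the same Pod as a minimal manifest, applied with `kubectl apply -f my-pod.yaml`:
-
-```
-apiVersion: v1
-kind: Pod
-metadata:
-  name: my-pod
-spec:
-  containers:
-  - name: nginx
-    image: nginx:alpine
-```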
-
-
-
-What are your thoughts on "Pods are not meant to be created directly"?
-
-Pods are indeed usually not created directly. You'll notice that Pods are usually created as part of other entities, such as Deployments or ReplicaSets.
-If a Pod dies, Kubernetes will not bring it back. This is why it's more useful for example to define ReplicaSets that will make sure that a given number of Pods will always run, even after a certain Pod dies.
-
-
-
-How many containers can a pod contain?
-
-A pod can include multiple containers but in most cases it would probably be one container per pod.
-
-
-
-What use cases exist for running multiple containers in a single pod?
-
-A web application with separate (= in their own containers) logging and monitoring components/adapters is one example.
-A CI/CD pipeline (using Tekton, for example) can run multiple containers in one Pod if a Task contains multiple commands.
-
-
-
-What are the possible Pod phases?
-
- * Running - The Pod is bound to a node and at least one container is running
- * Failed - At least one container in the Pod terminated with a failure
- * Succeeded - Every container in the Pod terminated with success
- * Unknown - Pod's state could not be obtained
- * Pending - Containers are not yet running (Perhaps images are still being downloaded or the pod wasn't scheduled yet)
-
-
-
-True or False? By default, pods are isolated. This means they are unable to receive traffic from any source
-
-False. By default, pods are non-isolated = pods accept traffic from any source.
-
-
-
-True or False? The "Pending" phase means the Pod was not yet accepted by the Kubernetes cluster so the scheduler can't run it unless it's accepted
-
-False. "Pending" is after the Pod was accepted by the cluster, but the container can't run for different reasons like images not yet downloaded.
-
-
-
-How to list the pods in the current namespace?
-
-`kubectl get po`
-
-
-
-How to view all the pods running in all the namespaces?
-
-`kubectl get pods --all-namespaces`
-
-
-
-True or False? A single Pod can be split across multiple nodes
-
-False. A single Pod can run only on a single node.
-
-
-
-How to delete a pod?
-
-`kubectl delete pod pod_name`
-
-
-
-How to find out on which node a certain pod is running?
-
-`kubectl get po -o wide`
-
-
-
-What are "Static Pods"?
-
-* Managed directly by the kubelet on a specific node
-* The API server is not observing static Pods
-* For each static Pod, there is a mirror Pod on the Kubernetes API server, but it can't be managed from there
-
-Read more about it [here](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod)
-
-
-
-True or False? A volume defined in Pod can be accessed by all the containers of that Pod
-
-True.
-
-
-
-What happens when you run a Pod?
-
-1. Kubectl sends a request to the API server to create the Pod
-2. The Scheduler detects that there is an unassigned Pod (by monitoring the API server)
-3. The Scheduler chooses a node to assign the Pod to
-4. The Scheduler updates the API server about which node it chose
-5. Kubelet (which also monitors the API server) notices there is a Pod assigned to the same node on which it runs and that Pod isn't running
-6. Kubelet sends a request to the container engine (e.g. Docker) to create and run the containers
-7. An update is sent by Kubelet to the API server (notifying it that the Pod is running)
-
-
-
-How to confirm a container is running after running the command `kubectl run web --image nginxinc/nginx-unprivileged`?
-
-* When you run `kubectl describe pods [pod name]`, it will tell you whether the container is running:
-`Status: Running`
-* Run a command inside the container: `kubectl exec web -- ls`
-
-
-
-After running `kubectl run database --image mongo` you see the status is "CrashLoopBackOff". What could possibly have gone wrong and what do you do to confirm?
-
-"CrashLoopBackOff" means the Pod is starting, crashing, starting...and so it repeats itself.
-There are many different reasons to get this error - lack of permissions, init-container misconfiguration, persistent volume connection issue, etc.
-
-One of the ways to check why it happened it to run `kubectl describe po ` and having a look at the exit code
-
-```
- Last State: Terminated
- Reason: Error
- Exit Code: 100
-```
-
-Another way to check what's going on, is to run `kubectl logs `. This will provide us with the logs from the containers running in that Pod.
-
-
-
-Explain the purpose of the following lines
-
-```
-livenessProbe:
-  exec:
-    command:
-    - cat
-    - /appStatus
-  initialDelaySeconds: 10
-  periodSeconds: 5
-```
-
-
-These lines make use of `liveness probe`. It's used to restart a container when it reaches a non-desired state.
-In this case, if the command `cat /appStatus` fails, Kubernetes will kill the container and will apply the restart policy. The `initialDelaySeconds: 10` means that Kubelet will wait 10 seconds before running the command/probe for the first time. From that point on, it will run it every 5 seconds, as defined with `periodSeconds`
-
-
-
-Explain the purpose of the following lines
-
-```
-readinessProbe:
-  tcpSocket:
-    port: 2017
-  initialDelaySeconds: 15
-  periodSeconds: 20
-```
-
-
-They define a readiness probe where the Pod will not be marked as "Ready" until it's possible to connect to port 2017 of the container. The first check/probe starts 15 seconds after the container starts running and then repeats every 20 seconds until it manages to connect to the defined port.
-
-
-
-What does the "ErrImagePull" status of a Pod means?
-
-It wasn't able to pull the image specified for running the container(s). This can happen if the client didn't authenticated for example.
-More details can be obtained with `kubectl describe po `.
-
-
-
-What happens when you delete a Pod?
-
-1. The `TERM` signal is sent to kill the main processes inside the containers of the given Pod
-2. Each container is given a grace period of 30 seconds to shut down its processes gracefully
-3. If the grace period expires, the `KILL` signal is used to kill the processes forcefully and the containers as well
-
-
-
-Explain liveness probes
-
-Liveness probes are a useful mechanism for restarting a container when a certain user-defined check/probe fails.
-For example, the user can define that the command `cat /app/status` will run every X seconds, and the moment this command fails, the container will be restarted.
-
-You can read more about it in [kubernetes.io](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes)
-
-
-
-Explain readiness probes
-
-Readiness probes are used by the kubelet to know when a container is ready to start accepting traffic.
-For example, a readiness probe can be connecting to port 8080 on a container. Once the kubelet manages to connect to it, the Pod is marked as ready.
-
-You can read more about it in [kubernetes.io](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes)
-
-
-
-How does readiness probe status affect Services when they are combined?
-
-Only containers whose readiness probe state is set to Success will be able to receive requests sent through the Service.
-
-
-
-Why is it usually considered better to include one container per Pod?
-
-One reason is that it makes scaling harder: with multiple containers in one Pod, you can't scale just one of them.
-
-
-#### Kubernetes - Deployments
-
-
-What is a "Deployment" in Kubernetes?
-
-A Kubernetes Deployment is used to tell Kubernetes how to create or modify instances of the pods that hold a containerized application.
-Deployments can scale the number of replica pods, enable rollout of updated code in a controlled manner, or roll back to an earlier deployment version if necessary.
-
-A Deployment is a declarative statement for the desired state for Pods and Replica Sets.
-
-
-
-How to create a deployment?
-
-```
-cat << EOF | kubectl create -f -
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: nginx
-  labels:
-    app: nginx
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: nginx
-  template:
-    metadata:
-      labels:
-        app: nginx
-    spec:
-      containers:
-      - name: nginx
-        image: nginx
-EOF
-```
-
-
-
-How to edit a deployment?
-
-kubectl edit deployment some-deployment
-
-
-
-What happens after you edit a deployment and change the image?
-
-The pod will terminate and another, new pod, will be created.
-
-Also, when looking at the replicasets, you'll see that the old replicaset doesn't have any pods and that a new replicaset was created.
-
-
-
-How to delete a deployment?
-
-One way is by specifying the deployment name: `kubectl delete deployment [deployment_name]`
-Another way is using the deployment configuration file: `kubectl delete -f deployment.yaml`
-
-
-
-What happens when you delete a deployment?
-
-The pod related to the deployment will terminate and the replicaset will be removed.
-
-
-
-How to make an app accessible on a private or external network?
-
-Using a Service.
-
-
-
-An internal load balancer in Kubernetes is called ____ and an external load balancer is called ____
-
-An internal load balancer in Kubernetes is called a Service and an external load balancer is called an Ingress
-
-
-#### Kubernetes - Services
-
-
-What is a Service in Kubernetes?
-
-"An abstract way to expose an application running on a set of Pods as a network service." - read more [here](https://kubernetes.io/docs/concepts/services-networking/service)
-In simpler words, it allows you to add an internal or external connectivity to a certain application running in a container.
-
-
-
-True or False? The lifecycle of Pods and Services isn't connected so when a Pod dies, the Service still stays
-
-True
-
-
-
-What Service types are there?
-
-* ClusterIP
-* NodePort
-* LoadBalancer
-* ExternalName
-
-More on this topic [here](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)
-
-
-
-How Service and Deployment are connected?
-
-The truth is they aren't connected. Service points to Pod(s) directly, without connecting to the Deployment in any way.
-
-
-
-What are important steps in defining/adding a Service?
-
-1. Making sure that the targetPort of the Service matches the containerPort of the Pod (see the sketch below)
-2. Making sure that the selector matches at least one of the Pod's labels
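-
-A minimal sketch of a Service illustrating both points (names and ports are illustrative):
-
-```
-apiVersion: v1
-kind: Service
-metadata:
-  name: some-app-svc
-spec:
-  selector:
-    app: some-app      # must match at least one of the Pod's labels
-  ports:
-  - port: 80
-    targetPort: 8080   # must match the containerPort of the Pod
-```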
-
-
-
-What is the default service type in Kubernetes and what is it used for?
-
-The default is ClusterIP and it's used for exposing a port internally. It's useful when you want to enable internal communication between Pods and prevent any external access.
-
-
-
-How to get information on a certain service?
-
-`kubectl describe service [service name]`
-
-It's more common to use `kubectl describe svc ...`
-
-
-
-What does the following command do?
-
-```
-kubectl expose rs some-replicaset --name=replicaset-svc --target-port=2017 --type=NodePort
-```
-
-
-It exposes a ReplicaSet by creating a service called 'replicaset-svc'. The target port is 2017 (the port used by the application) and the service type is NodePort, which means it will be reachable externally.
-
-
-
-True or False? The target port, in the case of running the following command, will be exposed only on one of the Kubernetes cluster nodes, but it will be routed to all the pods
-
-```
-kubectl expose rs some-replicaset --name=replicaset-svc --target-port=2017 --type=NodePort
-```
-
-
-False. It will be exposed on every node of the cluster and will be routed to one of the Pods (which belong to the ReplicaSet)
-
-
-
-How to verify that a certain service is configured to forward requests to a given pod?
-
-Run `kubectl describe service` and see if the IPs under "Endpoints" match any of the IPs in the output of `kubectl get pod -o wide`
-
-
-
-Explain what will happen when running apply on the following block
-
-```
-apiVersion: v1
-kind: Service
-metadata:
-  name: some-app
-spec:
-  type: NodePort
-  ports:
-  - port: 8080
-    nodePort: 2017
-    protocol: TCP
-  selector:
-    type: backend
-    service: some-app
-```
-
-
-It creates a new Service of type "NodePort", which means it can be used for internal and external communication with the app.
-The port of the application is 8080 and requests will be forwarded to this port. The exposed port is 2017. As a note, specifying the nodePort is not a common practice.
-The protocol used is TCP (instead of UDP), which is also the default, so you don't have to specify it.
-The selector is used by the Service to know to which Pods to forward the requests. In this case, Pods with the labels "type: backend" and "service: some-app".
-
-
-
-How to turn the following service into an external one?
-
-```
-spec:
-  selector:
-    app: some-app
-  ports:
-  - protocol: TCP
-    port: 8081
-    targetPort: 8081
-```
-
-
-Adding `type: LoadBalancer` and `nodePort`
-
-```
-spec:
-  selector:
-    app: some-app
-  type: LoadBalancer
-  ports:
-  - protocol: TCP
-    port: 8081
-    targetPort: 8081
-    nodePort: 32412
-```
-
-
-
-What would you use to route traffic from outside the Kubernetes cluster to services within a cluster?
-
-Ingress
-
-
-
-True or False? When "NodePort" is used, "ClusterIP" will be created automatically?
-
-True
-
-
-
-When would you use the "LoadBalancer" type
-
-Mostly when you would like to combine it with a cloud provider's load balancer
-
-
-
-How would you map a service to an external address?
-
-Using the 'ExternalName' directive.
-
-
-
-Describe in detail what happens when you create a service
-
-1. Kubectl sends a request to the API server to create a Service
-2. The controller detects there is a new Service
-3. Endpoint objects are created with the same name as the Service, by the controller
-4. The controller is using the Service selector to identify the endpoints
-5. kube-proxy detects there is a new endpoint object + new service and adds iptables rules to capture traffic to the Service port and redirect it to endpoints
-6. kube-dns detects there is a new Service and adds its record to the DNS server
-
-
-
-How to list the endpoints of a certain app?
-
-`kubectl get ep [service name]`
-
-
-
-How can you find out information on a Service related to a certain Pod if all you can use is `kubectl exec [pod name] -- [command]`?
-
-You can run `kubectl exec [pod name] -- env`, which will give you a couple of environment variables related to the Service.
-Variables such as `[SERVICE_NAME]_SERVICE_HOST`, `[SERVICE_NAME]_SERVICE_PORT`, ...
-
-
-
-Describe what happens when a container tries to connect with its corresponding Service for the first time. Explain who added each of the components you include in your description
-
- - The container looks at the nameserver defined in /etc/resolv.conf
- - The container queries the nameserver so the address is resolved to the Service IP
- - Requests sent to the Service IP are forwarded with iptables rules (or other chosen software) to the endpoint(s).
-
-Explanation as to who added them:
-
- - The nameserver in the container is added by kubelet during the scheduling of the Pod, by using kube-dns
- - The DNS record of the service is added by kube-dns during the Service creation
- - iptables rules are added by kube-proxy during Endpoint and Service creation
-
-
-#### Kubernetes - Ingress
-
-
-What is Ingress?
-
-From Kubernetes docs: "Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource."
-
-Read more [here](https://kubernetes.io/docs/concepts/services-networking/ingress/)
-
-
-
-Complete the following configuration file to make it an Ingress
-
-```
-metadata:
-  name: someapp-ingress
-spec:
-```
-
-There are several ways to answer this question.
-
-```
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  name: someapp-ingress
-spec:
-  rules:
-  - host: my.host
-    http:
-      paths:
-      - path: /
-        pathType: Prefix
-        backend:
-          service:
-            name: someapp-internal-service
-            port:
-              number: 8080
-```
-
-
-
-
-Explain the meaning of "http", "host" and "backend" directives
-
-```
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  name: someapp-ingress
-spec:
-  rules:
-  - host: my.host
-    http:
-      paths:
-      - path: /
-        pathType: Prefix
-        backend:
-          service:
-            name: someapp-internal-service
-            port:
-              number: 8080
-```
-
-
-host is the entry point of the cluster, so basically a valid domain address that maps to the cluster's node IP address
-the http line is used for specifying that incoming requests will be forwarded to the internal service using http
-backend references the internal Service ("name" is the Service's name and "port: number" is the port defined in that Service)
-
-
-
-Why may using a wildcard in an Ingress host lead to issues?
-
-The reason you should not use a wildcard value in a host (like `- host: *`) is that you basically tell your Kubernetes cluster to forward all the traffic to the container where you used this Ingress. This may cause the entire cluster to go down.
-
-
-
-What is Ingress Controller?
-
-An implementation of Ingress. It's basically another pod (or set of pods) that evaluates and processes Ingress rules, and thus manages all the redirections.
-
-There are multiple Ingress Controller implementations (the one from the Kubernetes project is the Kubernetes Nginx Ingress Controller).
-
-
-
-What are some use cases for using Ingress?
-
-* Multiple sub-domains (multiple host entries, each with its own service)
-* One domain with multiple services (multiple paths where each one is mapped to a different service/application)
-
-
-
-How to list Ingress in your namespace?
-
-kubectl get ingress
-
-
-
-What is Ingress Default Backend?
-
-It specifies what to do with an incoming request to the Kubernetes cluster that isn't mapped to any backend (= there is no rule for mapping the request to a service). If the default backend service isn't defined, it's recommended to define one, so users still see some kind of message instead of nothing or an unclear error.
-
-
-
-How to configure a default backend?
-
-Create a Service resource that specifies the name of the default backend as reflected in `kubectl describe ingress ...` and the port under the ports section.
-
-
-
-How to configure TLS with Ingress?
-
-Add tls and secretName entries.
-
-```
-spec:
-  tls:
-  - hosts:
-    - some_app.com
-    secretName: someapp-secret-tls
-```
-
-
-
-True or False? When configuring Ingress with TLS, the Secret component must be in the same namespace as the Ingress component
-
-True
-
-
-
-Which Kubernetes concept would you use to control traffic flow at the IP address or port level?
-
-Network Policies
-
-
-
-What does the following block of lines do?
-
-```
-spec:
- replicas: 2
- selector:
- matchLabels:
- type: backend
- template:
- metadata:
- labels:
- type: backend
- spec:
- containers:
- - name: httpd-yup
- image: httpd
-```
-
-
-It defines a ReplicaSet for Pods whose type is set to "backend", so that at any given point in time there will be 2 such Pods running.
-
-
-#### Kubernetes - ReplicaSets
-
-
-What is the purpose of ReplicaSet?
-
-[kubernetes.io](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset): "A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods."
-
-In simpler words, a ReplicaSet will ensure the specified number of Pod replicas is running for a selected Pod. If there are more Pods than defined in the ReplicaSet, some will be removed. If there are fewer than what is defined, more replicas will be added.
-
-
-
-What will happen when a Pod, created by a ReplicaSet, is deleted directly with `kubectl delete po ...`?
-
-The ReplicaSet will create a new Pod in order to reach the desired number of replicas.
-
-
-
-True or False? If a ReplicaSet defines 2 replicas but there are 3 Pods running matching the ReplicaSet selector, it will do nothing
-
-False. It will terminate one of the Pods to reach the desired state of 2 replicas.
-
-
-
-Describe the sequence of events in case of creating a ReplicaSet
-
-* The client (e.g. kubectl) sends a request to the API server to create a ReplicaSet
-* The Controller detects there is a new event requesting for a ReplicaSet
-* The controller creates new Pod definitions (the exact number depends on what is defined in the ReplicaSet definition)
-* The scheduler detects unassigned Pods and decides to which nodes to assign them. This information is sent to the API server
-* Kubelet detects that two Pods were assigned to the node it's running on (as it's constantly watching the API server)
-* Kubelet sends requests to the container engine, to create the containers that are part of the Pod
-* Kubelet sends a request to the API server to notify it the Pods were created
-
-
-
-How to list ReplicaSets in the current namespace?
-
-kubectl get rs
-
-
-
-Is it possible to delete ReplicaSet without deleting the Pods it created?
-
-Yes, with `--cascade=false`.
-
-`kubectl delete -f rs.yaml --cascade=false`
-
-
-
-What is the default number of replicas if not explicitly specified?
-
-1
-
-
-
-What does the following output of `kubectl get rs` mean?
-
-```
-NAME    DESIRED   CURRENT   READY   AGE
-web     2         2         0       2m23s
-```
-
-
-The ReplicaSet `web` has 2 replicas. It seems that the containers inside the Pod(s) are not yet running, since the value of READY is 0. This might be normal (it takes time for some containers to start running), but it might also be due to an error. Running `kubectl describe po POD_NAME` or `kubectl logs POD_NAME` can give us more information.
-
-
-
-You run `kubectl get rs` and while DESIRED is set to 2, you see that READY is set to 0. What are some possible reasons for it to be 0?
-
- * Images are still being pulled
- * There is an error and the containers can't reach the state "Running"
-
-
-
-True or False? Pods specified by the selector field of ReplicaSet must be created by the ReplicaSet itself
-
-False. The Pods can already be running, and they can initially be created by any object. It doesn't matter to the ReplicaSet, and it isn't a requirement for it to acquire and monitor them.
-
-
-
-True or False? In case of a ReplicaSet, if Pods specified in the selector field don't exist, the ReplicaSet will wait for them to run before doing anything
-
-False. It will take care of running the missing Pods.
-
-
-
-In case of a ReplicaSet, which field is mandatory in the spec section?
-
-The field `template` in the spec section is mandatory. It's used by the ReplicaSet to create new Pods when needed.
-
-
-
-You've created a ReplicaSet, how to check whether the ReplicaSet found matching Pods or it created new Pods?
-
-`kubectl describe rs [rs_name]`
-
-It will be visible under `Events` (the very last lines)
-
-
-
-True or False? Deleting a ReplicaSet will delete the Pods it created
-
-True (and not only the Pods but anything else it created).
-
-
-
-True or False? Removing the label from a Pod that is used by ReplicaSet to match Pods, will cause the ReplicaSet to create a new Pod
-
-True. When the label used by a ReplicaSet in the selector field is removed from a Pod, that Pod is no longer controlled by the ReplicaSet, and the ReplicaSet will create a new Pod to compensate for the one it "lost".
-
-
-
-How to scale a deployment to 8 replicas?
-
-kubectl scale deploy [deployment_name] --replicas=8
-
-
-
-True or False? ReplicaSets are running the moment the user executed the command to create them (like `kubectl create -f rs.yaml`)
-
-False. It can take some time, depending on what exactly you are running. To see if they are up and running, run `kubectl get rs` and watch the 'READY' column.
-
-
-
-How to expose a ReplicaSet as a new service?
-
-`kubectl expose rs [rs_name] --name=[service_name] --target-port=[container_port] --type=NodePort`
-
-Few notes:
- - the target port depends on which port the app is using in the container
- - the type can be different and doesn't have to be specifically "NodePort"
-
-
-#### Kubernetes - Storage
-
-
-What is a volume in regards to Kubernetes?
-
-A directory accessible by the containers inside a certain Pod. The mechanism responsible for creating and managing the directory depends mainly on the volume type.
-
-
-
-Which problems volumes in Kubernetes solve?
-
-1. Sharing files between containers running in the same Pod
-2. Storage in containers is ephemeral - it usually doesn't last for long. For example, when a container crashes, you lose all on-disk data.
-
-
-
-Explain ephemeral volume types vs. persistent volumes in regards to Pods
-
-Ephemeral volume types have the lifetime of a pod as opposed to persistent volumes which exist beyond the lifetime of a Pod.
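-
-For example, a minimal sketch of a Pod with an ephemeral `emptyDir` volume shared between two containers (names and images are illustrative):
-
-```
-apiVersion: v1
-kind: Pod
-metadata:
-  name: shared-volume-pod
-spec:
-  containers:
-  - name: writer
-    image: busybox
-    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
-    volumeMounts:
-    - name: cache
-      mountPath: /data
-  - name: reader
-    image: busybox
-    command: ["sh", "-c", "sleep 3600"]
-    volumeMounts:
-    - name: cache
-      mountPath: /data
-  volumes:
-  - name: cache
-    emptyDir: {}
-```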
-
-
-#### Kubernetes - Network Policies
-
-
-Explain Network Policies
-
-[kubernetes.io](https://kubernetes.io/docs/concepts/services-networking/network-policies): "NetworkPolicies are an application-centric construct which allow you to specify how a pod is allowed to communicate with various network "entities"..."
-
-In simpler words, Network Policies specify how pods are allowed/disallowed to communicate with each other and/or other network endpoints.
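-
-For example, a minimal sketch of a policy that denies all ingress traffic to all pods in its namespace:
-
-```
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: deny-all-ingress
-spec:
-  podSelector: {}   # an empty selector matches all pods in the namespace
-  policyTypes:
-  - Ingress         # no ingress rules are listed, so all ingress is denied
-```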
-
-
-
-What are some use cases for using Network Policies?
-
- - Security: you want to prevent everyone from communicating with a certain pod for security reasons
- - Controlling network traffic: you would like to deny network flow between two specific pods
-
-
-
-True or False? If no network policies are applied to a pod, then no connections to or from it are allowed
-
-False. By default pods are non-isolated.
-
-
-
-In case of two pods, if there is an egress policy on the source denying traffic and an ingress policy on the destination that allows traffic, will traffic be allowed or denied?
-
-Denied. Both the source and destination policies have to allow traffic for it to be allowed.
-
-
-#### Kubernetes - Configuration File
-
-
-Which parts does a configuration file have?
-
-It has three main parts:
-1. Metadata
-2. Specification
-3. Status (this is automatically generated and added by Kubernetes)
-
-
-
-What is the format of a configuration file?
-
-YAML
-
-
-
-How to get latest configuration of a deployment?
-
-`kubectl get deployment [deployment_name] -o yaml`
-
-
-
-Where does a Kubernetes cluster store the cluster state?
-
-etcd
-
-
-#### Kubernetes - etcd
-
-
-What is etcd?
-
-
-
-True or False? Etcd holds the current status of any kubernetes component
-
-True
-
-
-
-True or False? The API server is the only component which communicates directly with etcd
-
-True
-
-
-
-True or False? application data is not stored in etcd
-
-True
-
-
-
-Why etcd? Why not some SQL or NoSQL database?
-
-
-#### Kubernetes - Namespaces
-
-
-What are namespaces?
-
-Namespaces allow you to split your cluster into virtual clusters where you can group your applications in a way that makes sense and is completely separated from the other groups (so you can, for example, create an app with the same name in two different namespaces)
-
-
-
-Why to use namespaces? What is the problem with using one default namespace?
-
-When using the default namespace alone, it becomes hard over time to get an overview of all the applications you manage in your cluster. Namespaces make it easier to organize the applications into groups that make sense, like a namespace for all the monitoring applications and a namespace for all the security applications, etc.
-
-Namespaces can also be useful for managing Blue/Green environments where each namespace can include a different version of an app and also share resources that are in other namespaces (namespaces like logging, monitoring, etc.).
-
-Another use case for namespaces is one cluster, multiple teams. When multiple teams use the same cluster, they might end up stepping on each other's toes. For example, if they end up creating an app with the same name, one of the teams overrides the app of the other team, because there can't be two apps in Kubernetes with the same name (in the same namespace).
-
-
-
-True or False? When a namespace is deleted all resources in that namespace are not deleted but moved to another default namespace
-
-False. When a namespace is deleted, the resources in that namespace are deleted as well.
-
-
-
-What special namespaces are there by default when creating a Kubernetes cluster?
-
-* default
-* kube-system
-* kube-public
-* kube-node-lease
-
-
-
-What can you find in kube-system namespace?
-
-* Master and Kubectl processes
-* System processes
-
-
-
-How to list all namespaces?
-
-`kubectl get namespaces`
-
-
-
-What does kube-public contain?
-
-* A configmap, which contains cluster information
-* Publicly accessible data
-
-
-
-How to get the name of the current namespace?
-
-kubectl config view | grep namespace
-
-
-
-What does kube-node-lease contain?
-
-It holds information on heartbeats of nodes. Each node gets an object which holds information about its availability.
-
-
-
-How to create a namespace?
-
-One way is by running `kubectl create namespace [NAMESPACE_NAME]`
-
-Another way is by using a namespace configuration file:
-```
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: some-namespace
-```
-
-
-
-What does the default namespace contain?
-
-Any resource you create without explicitly specifying another namespace.
-
-
-
-True or False? With namespaces you can limit the resources consumed by the users/teams
-
-True. With namespaces you can limit CPU, RAM and storage usage.
-
-
-
-How to switch to another namespace? In other words how to change active namespace?
-
-`kubectl config set-context --current --namespace=some-namespace` and validate with `kubectl config view --minify | grep namespace:`
-
-OR
-
-`kubens some-namespace`
-
-
-
-What is Resource Quota?
-
-
-
-How to create a Resource Quota?
-
-kubectl create quota some-quota --hard=cpu=2,pods=2
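-
-The equivalent as a configuration file would be a sketch like the following:
-
-```
-apiVersion: v1
-kind: ResourceQuota
-metadata:
-  name: some-quota
-  namespace: some-namespace
-spec:
-  hard:
-    cpu: "2"
-    pods: "2"
-```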
-
-
-
-Which resources are accessible from different namespaces?
-
-Service.
-
-
-
-Let's say you have three namespaces: x, y and z. In x namespace you have a ConfigMap referencing service in z namespace. Can you reference the ConfigMap in x namespace from y namespace?
-
-No, you would have to create a separate ConfigMap in the y namespace.
-
-
-
-Which service and in which namespace the following file is referencing?
-
-```
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: some-configmap
-data:
- some_url: samurai.jack
-```
-
-
-It's referencing the service "samurai" in the namespace called "jack".
-
-
-
-Which components can't be created within a namespace?
-
-Volume and Node.
-
-
-
-How to list all the components that are bound to a namespace?
-
-`kubectl api-resources --namespaced=true`
-
-
-
-How to create components in a namespace?
-
-One way is by specifying --namespace like this: `kubectl apply -f my_component.yaml --namespace=some-namespace`
-Another way is by specifying it in the YAML itself:
-
-```
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: some-configmap
- namespace: some-namespace
-```
-
-and you can verify with: `kubectl get configmap -n some-namespace`
-
-
-
-How to execute the command "ls" in an existing pod?
-
-kubectl exec some-pod -it -- ls
-
-
-
-How to create a service that exposes a deployment?
-
-kubectl expose deploy some-deployment --port=80 --target-port=8080
-
-
-
-How to create a pod and a service with one command?
-
-kubectl run nginx --image=nginx --restart=Never --port 80 --expose
-
-
-
-Describe in detail what the following command does: `kubectl create deployment kubernetes-httpd --image=httpd`
-
-
-
-Why create a Deployment, if Pods can be launched with a ReplicaSet?
-
-
-
-How to get list of resources which are not in a namespace?
-
-kubectl api-resources --namespaced=false
-
-
-
-How to delete all pods whose status is not "Running"?
-
-kubectl delete pods --field-selector=status.phase!='Running'
-
-
-
-What does the `kubectl logs [pod-name]` command do?
-
-
-
-What does the `kubectl describe pod [pod-name]` command do?
-
-
-
-How to display the resources usages of pods?
-
-kubectl top pod
-
-
-
-What does `kubectl get componentstatus` do?
-
-Outputs the status of each of the control plane components.
-
-
-
-What is Minikube?
-
-Minikube is a lightweight Kubernetes implementation. It creates a local virtual machine and deploys a simple (single node) cluster.
-
-
-
-How do you monitor your Kubernetes?
-
-
-
-You suspect one of the pods is having issues, what do you do?
-
-Start by inspecting the Pods' status. We can use the command `kubectl get pods` (add `--all-namespaces` to include Pods in system namespaces)
-
-If we see "Error" status, we can keep debugging by running the command `kubectl describe pod [name]`. In case we still don't see anything useful we can try stern for log tailing.
-
-In case we find out there was a temporary issue with the pod or the system, we can try restarting the pod with the following `kubectl scale deployment [name] --replicas=0`
-
-Setting the replicas to 0 will shut down the process. Now start it with `kubectl scale deployment [name] --replicas=1`
-
-
-
-What does the Kubernetes Scheduler do?
-
-
-
-What happens to running pods if you stop Kubelet on the worker nodes?
-
-
-
-What happens when pods use too much memory (more than their limit)?
-
-They become candidates for termination.
-
-
-
-Describe how roll-back works
-
-
-
-True or False? Memory is a compressible resource, meaning that when a container reaches the memory limit, it will keep running
-
-False. CPU is a compressible resource while memory is a non-compressible resource - once a container reaches its memory limit, it will be terminated.
-
-
-
-What is the control loop? How does it work?
-
-Explained [here](https://www.youtube.com/watch?v=i9V4oCa5f9I)
-
-
-#### Kubernetes - Operators
-
-
-What is an Operator?
-
-Explained [here](https://kubernetes.io/docs/concepts/extend-kubernetes/operator)
-
-"Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop."
-
-
-
-Why do we need Operators?
-
-The process of managing stateful applications in Kubernetes isn't as straightforward as managing stateless applications, where reaching the desired state and performing upgrades are handled the same way for every replica. In stateful applications, upgrading each replica might require different handling due to the stateful nature of the app; each replica might be in a different state. As a result, we often need a human operator to manage stateful applications. A Kubernetes Operator is supposed to assist with this.
-
-This also helps with automating a standard process on multiple Kubernetes clusters.
-
-
-
-What components does the Operator consist of?
-
-1. CRD (custom resource definition)
-2. Controller - Custom control loop which runs against the CRD
-
-
-
-How does an Operator work?
-
-It uses the control loop used by Kubernetes in general. It watches for changes in the application state. The difference is that it uses a custom control loop.
-
-In addition, it also makes use of CRDs (Custom Resource Definitions), so basically it extends the Kubernetes API.
-
-
-
-True or False? A Kubernetes Operator is used for stateful applications
-
-True
-
-
-
-What is the Operator Framework?
-
-An open source toolkit used to manage Kubernetes-native applications, called Operators, in an automated and efficient way.
-
-
-
-What components does the Operator Framework consist of?
-
-1. Operator SDK - allows developers to build operators
-2. Operator Lifecycle Manager - helps to install, update and generally manage the lifecycle of all operators
-3. Operator Metering - Enables usage reporting for operators that provide specialized services
-
-
-
-Describe in detail what is the Operator Lifecycle Manager
-
-It's part of the Operator Framework, used for managing the lifecycle of operators. It basically extends Kubernetes so a user can use a declarative way to manage operators (installation, upgrade, ...).
-
-
-
-What does the openshift-operator-lifecycle-manager namespace include?
-
-It includes:
-
- * catalog-operator - Resolves and installs ClusterServiceVersions and the resources they specify.
- * olm-operator - Deploys applications defined by a ClusterServiceVersion resource
-
-
-
-What is kubeconfig? What do you use it for?
-
-
-
-Can you use a Deployment for stateful applications?
-
-
-
-Explain StatefulSet
-
-
-#### Kubernetes - Secrets
-
-
-Explain Kubernetes Secrets
-
-Secrets let you store and manage sensitive information (passwords, ssh keys, etc.)
-
-
-
-How to create a Secret from a key and value?
-
-kubectl create secret generic some-secret --from-literal=password='donttellmypassword'
-
-
-
-How to create a Secret from a file?
-
-kubectl create secret generic some-secret --from-file=/some/file.txt
-
-
-
-What does `type: Opaque` in a secret file mean? What other types are there?
-
-Opaque is the default type used for key-value pairs.
-
-
-
-True or False? Storing data in a Secret component makes it automatically secured
-
-False. Some known security mechanisms like "encryption" aren't enabled by default.
-
-
-
-What is the problem with the following Secret file:
-
-```
-apiVersion: v1
-kind: Secret
-metadata:
- name: some-secret
-type: Opaque
-data:
- password: mySecretPassword
-```
-
-The password value isn't base64-encoded. Secret `data` values must be base64-encoded (note that this is encoding, not encryption).
-You should run something like this: `echo -n 'mySecretPassword' | base64` and paste the result into the file instead of using plain text.
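-
-The fixed file would then look like this (`bXlTZWNyZXRQYXNzd29yZA==` is the base64 encoding of 'mySecretPassword'):
-
-```
-apiVersion: v1
-kind: Secret
-metadata:
-  name: some-secret
-type: Opaque
-data:
-  password: bXlTZWNyZXRQYXNzd29yZA==
-```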
-
-
-
-How to create a Secret from a configuration file?
-
-`kubectl apply -f some-secret.yaml`
-
-
-
-What does the following in a Deployment configuration file mean?
-
-```
-spec:
-  containers:
-  - name: some-container
-    env:
-    - name: USER_PASSWORD
-      valueFrom:
-        secretKeyRef:
-          name: some-secret
-          key: password
-```
-
-The USER_PASSWORD environment variable will store the value from the password key in the secret called "some-secret".
-In other words, you reference a value from a Kubernetes Secret.
-
-
-#### Kubernetes - Volumes
-
-
-True or False? Kubernetes provides data persistence out of the box, so when you restart a pod, data is saved
-
-False
-
-
-
-Explain "Persistent Volumes". Why do we need it?
-
-Persistent Volumes provide storage that doesn't depend on the pod lifecycle, so data is preserved across Pod restarts and rescheduling.
-
-
-
-True or False? Persistent Volume must be available to all nodes because the pod can restart on any of them
-
-True
-
-
-
-What types of persistent volumes are there?
-
-* NFS
-* iSCSI
-* CephFS
-* ...
-
-
-
-What is PersistentVolumeClaim?
-
-
-
-Explain Volume Snapshots
-
-
-
-True or False? Kubernetes manages data persistence
-
-False
-
-
-
-Explain Storage Classes
-
-
-
-Explain "Dynamic Provisioning" and "Static Provisioning"
-
-
-
-Explain Access Modes
-
-
-
-What is CSI Volume Cloning?
-
-
-
-Explain "Ephemeral Volumes"
-
-
-
-What types of ephemeral volumes does Kubernetes support?
-
-
-
-What is Reclaim Policy?
-
-
-
-What reclaim policies are there?
-
-* Retain
-* Recycle
-* Delete
-
-
-#### Kubernetes - Access Control
-
-
-What is RBAC?
-
-
-
-Explain the `Role` and `RoleBinding` objects
-
-
-
-What is the difference between `Role` and `ClusterRole` objects?
-
-
-
-Explain what are "Service Accounts" and in which scenario would use create/use one
-
-[Kubernetes.io](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account): "A service account provides an identity for processes that run in a Pod."
-
-An example of when to use one:
-You define a pipeline that needs to build and push an image. In order to have sufficient permissions to build and push an image, that pipeline would require a service account with sufficient permissions.
-
-
-
-What happens when you create a pod and you DON'T specify a service account?
-
-The pod is automatically assigned the default service account (in the namespace where the pod is running).
-
-
-
-Explain how Service Accounts are different from User Accounts
-
- - User accounts are global while Service accounts are unique per namespace
- - User accounts are meant for humans or client processes while Service accounts are meant for processes which run in pods
-
-
-
-How to list Service Accounts?
-
-`kubectl get serviceaccounts`
-
-
-
-Explain "Security Context"
-
-[kubernetes.io](https://kubernetes.io/docs/tasks/configure-pod-container/security-context): "A security context defines privilege and access control settings for a Pod or Container."
-
-
-#### Kubernetes - Patterns
-
-
-Explain the sidecar container pattern
-
-
-#### Kubernetes - CronJob
-
-
-Explain what is CronJob and what is it used for
-
-
-
-What possible issue can arise from using the following spec and how to fix it?
-
-```
-apiVersion: batch/v1beta1
-kind: CronJob
-metadata:
- name: some-cron-job
-spec:
- schedule: '*/1 * * * *'
- startingDeadlineSeconds: 10
- concurrencyPolicy: Allow
-```
-
-
-If the cron job fails, the next job will not replace the previous one, due to the "concurrencyPolicy" value which is "Allow". It will keep spawning new jobs, so eventually the system will be filled with failed cron jobs.
-To avoid such a problem, the "concurrencyPolicy" value should be either "Replace" or "Forbid".
-
-
-
-What issue might arise from using the following CronJob and how to fix it?
-
-```
-apiVersion: batch/v1beta1
-kind: CronJob
-metadata:
- name: "some-cron-job"
-spec:
- schedule: '*/1 * * * *'
-jobTemplate:
- spec:
- template:
- spec:
- restartPolicy: Never
- concurrencyPolicy: Forbid
- successfulJobsHistoryLimit: 1
- failedJobsHistoryLimit: 1
-```
-
-
-The following lines are placed under the template:
-
-```
-concurrencyPolicy: Forbid
-successfulJobsHistoryLimit: 1
-failedJobsHistoryLimit: 1
-```
-
-As a result, this configuration isn't part of the cron job spec, hence the cron job has no limits. This can cause issues like OOM and potentially lead to the API server being down.
-To fix it, these lines should be placed in the spec of the cron job, above or under the "schedule" directive in the above example.
-
-
-#### Kubernetes - Misc
-
-
-Explain Imperative Management vs. Declarative Management
-
-
-
-Explain what Kubernetes Service Discovery means
-
-
-
-You have one Kubernetes cluster and multiple teams that would like to use it. You would like to limit the resources each team consumes in the cluster. Which Kubernetes concept would you use for that?
-
-Namespaces will allow you to limit resources and also make sure there are no collisions between teams when working in the cluster (like creating an app with the same name).
-
-
-
-What Kube Proxy does?
-
-
-
-What "Resources Quotas" are used for and how?
-
-
-
-Explain ConfigMap
-
-Separates configuration from the pods.
-It's good for cases where you might need to change configuration at some point, but you don't want to restart the application or rebuild the image, so you create a ConfigMap and connect it to a pod, externally to the pod.
-
-Overall it's good for:
-* Sharing the same configuration between different pods
-* Storing external to the pod configuration
-
-
-
-How to use ConfigMaps?
-
-1. Create it (from key&value, a file or an env file)
-2. Attach it. Mount a configmap as a volume
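-
-A minimal sketch, assuming a ConfigMap named `some-configmap` that holds an `app.conf` key, mounted into a Pod as a volume:
-
-```
-apiVersion: v1
-kind: Pod
-metadata:
-  name: some-app
-spec:
-  containers:
-  - name: some-app-container
-    image: httpd
-    volumeMounts:
-    - name: config
-      mountPath: /etc/some-app   # app.conf appears as a file in this directory
-  volumes:
-  - name: config
-    configMap:
-      name: some-configmap
-```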
-
-
-
-True or False? Sensitive data, like credentials, should be stored in a ConfigMap
-
-False. Use secret.
-
-
-
-Explain "Horizontal Pod Autoscaler"
-
-Scales the number of pods automatically based on observed CPU utilization.
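-
-For example, one way to create one from the command line (the deployment name and thresholds are illustrative):
-
-```
-kubectl autoscale deployment some-deployment --cpu-percent=70 --min=2 --max=10
-```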
-
-
-
-When you delete a pod, is it deleted instantly? (a moment after running the command)
-
-
-
-What does being cloud-native mean?
-
-
-
-Explain the pet and cattle approach of infrastructure with respect to kubernetes
-
-
-
-Describe how one proceeds to run a containerised web app in K8s, which should be reachable from a public URL.
-
-
-
-How would you troubleshoot your cluster if some applications are not reachable any more?
-
-
-
-Describe what CustomResourceDefinitions are in the Kubernetes world. What can they be used for?
-
-
-
-How does scheduling work in Kubernetes?
-
-The control plane component kube-scheduler asks the following questions:
-1. What to schedule? It tries to understand the pod-definition specifications
-2. Which node to schedule on? It tries to determine the best node with available resources to run the pod
-3. It then binds the Pod to the chosen node
-
-View more [here](https://www.youtube.com/watch?v=rDCWxkvPlAw)
-
-
-
-How are labels and selectors used?
-
-
-
-What QoS classes are there?
-
-* Guaranteed
-* Burstable
-* BestEffort
-
-
-
-Explain Labels. What are they and why would one use them?
-
-
-
-Explain Selectors
-
-
-
-What is Kubeconfig?
-
-
-#### Kubernetes - Gatekeeper
-
-
-What is Gatekeeper?
-
-[Gatekeeper docs](https://open-policy-agent.github.io/gatekeeper/website/docs): "Gatekeeper is a validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent"
-
-
-
-Explain how Gatekeeper works
-
-On every request sent to the Kubernetes cluster, Gatekeeper sends the policies and the resources to OPA (Open Policy Agent) to check if the request violates any policy. If it does, Gatekeeper will return the policy error message back. If it doesn't violate any policy, the request will reach the cluster.
-
-
-#### Kubernetes - Policy Testing
-
-
-What is Conftest?
-
-Conftest allows you to write tests against structured files. You can think of it as a test library for Kubernetes resources.
-It is mostly used in testing environments such as CI pipelines or local hooks.
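-
-For example, assuming Rego policies in a local `policy/` directory (Conftest's default), you could test a manifest like this:
-
-```
-conftest test deployment.yaml
-```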
-
-
-
-What is Datree? How is it different from Conftest?
-
-Same as Conftest, it is used for policy testing and enforcement. The difference is that it comes with built-in policies.
-
-
-#### Kubernetes - Helm
-
-
-What is Helm?
-
-Package manager for Kubernetes. Basically the ability to package YAML files and distribute them to other users and apply them in different clusters.
-
-
-
-Why do we need Helm? What would be the use case for using it?
-
-Sometimes when you would like to deploy a certain application to your cluster, you need to create multiple YAML files/components like: Secret, Service, ConfigMap, etc. This can be a tedious task. So it would make sense to ease the process by introducing something that allows us to share this bundle of YAMLs every time we would like to add an application to our cluster. This something is called Helm.
-
-A common scenario is having multiple Kubernetes clusters (prod, dev, staging). Instead of individually applying different YAMLs in each cluster, it makes more sense to create one Chart and install it in every cluster.
-
-
-
-Explain "Helm Charts"
-
-A Helm Chart is a bundle of YAML files. A bundle that you can consume from repositories, or create on your own and publish to the repositories.
-
-
-
-It is said that Helm is also Templating Engine. What does it mean?
-
-It is useful for scenarios where you have multiple applications that are all similar, so there are only minor differences in their configuration files and most values are the same. With Helm you can define a common blueprint for all of them, and values that are not fixed can be placeholders. This is called a template file and it looks similar to the following:
-
-```
-apiVersion: v1
-kind: Pod
-metadata:
- name: {{ .Values.name }}
-spec:
- containers:
- - name: {{ .Values.container.name }}
- image: {{ .Values.container.image }}
- port: {{ .Values.container.port }}
-```
-
-The values themselves will be in a separate file:
-
-```
-name: some-app
-container:
- name: some-app-container
- image: some-app-image
- port: 1991
-```
-
-
-
-What are some use cases for using Helm template file?
-
-* Deploy the same application across multiple different environments
-* CI/CD
-
-
-
-Explain the Helm Chart Directory Structure
-
-```
-someChart/       -> the name of the chart
-  Chart.yaml     -> meta information on the chart
-  values.yaml    -> values for template files
-  charts/        -> chart dependencies
-  templates/     -> template files :)
-```
-
-
-
-How do you search for charts?
-
-`helm search hub [some_keyword]`
-
-
-
-Is it possible to override values in the values.yaml file when installing a chart?
-
-Yes. You can pass another values file:
-`helm install --values=override-values.yaml [CHART_NAME]`
-
-Or directly on the command line: `helm install --set some_key=some_value`
-
-
-
-How does Helm support release management?
-
-Helm allows you to upgrade, remove and roll back to previous versions of charts. In version 2 of Helm, this was done with what is known as "Tiller". In version 3, Tiller was removed due to security concerns.
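-
-For example (release and chart names are illustrative):
-
-```
-helm upgrade some-release some-chart   # upgrade a release to a newer chart version
-helm history some-release              # list the revisions of a release
-helm rollback some-release 1           # roll back to revision 1
-helm uninstall some-release            # remove the release
-```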
-
-
-#### Kubernetes - Security
-
-
-What security best practices do you follow in regards to the Kubernetes cluster?
-
- * Secure inter-service communication (one way is to use Istio to provide mutual TLS)
- * Isolate different resources into separate namespaces based on some logical groups
- * Use a supported container runtime (if you use Docker then drop it because it's deprecated. You might want to use CRI-O as an engine and podman for the CLI)
- * Test properly changes to the cluster (e.g. consider using Datree to prevent kubernetes misconfigurations)
- * Limit who can do what (by using for example OPA gatekeeper) in the cluster
- * Use NetworkPolicy to apply network security
- * Consider using tools (e.g. Falco) for monitoring threats
-
-
-#### Kubernetes - Troubleshooting Scenarios
-
-
-Running `kubectl get pods` you see Pods in "Pending" status. What would you do?
-
-One possible path is to run `kubectl describe pod [pod_name]` to get more details.
-You might see one of the following:
- * Cluster is full. In this case, extend the cluster.
- * ResourcesQuota limits are met. In this case you might want to modify them
- * Check if PersistentVolumeClaim mount is pending
-
-If none of the above helped, run the command (`get pods`) with `-o wide` to see if the Pod is assigned to a node. If not, there might be an issue with the scheduler.
-
-
-
-Users are unable to reach an application running on a Pod on Kubernetes. What might be the issue and how to check?
-
-One possible path is to start with checking the Pod status.
-1. Is the Pod pending? If yes, check for the reason with `kubectl describe pod [pod_name]`
-TODO: finish this...
-
-
-#### Kubernetes - Submariner
-
-
-Explain what is Submariner and what is it used for
-
-"Submariner enables direct networking between pods and services in different Kubernetes clusters, either on premise or in the cloud."
-
-You can learn more [here](https://submariner-io.github.io)
-
-
-
-What does each of the following components do?
-
- * Lighthouse
- * Broker
- * Gateway Engine
- * Route Agent
-
-
-#### Kubernetes - Istio
-
-
-What is Istio? What is it used for?
-
-
## Programming
@@ -10781,207 +4680,6 @@ Alert manager is responsible for alerts ;)
How do you convert cpu_user_seconds to cpu usage in percentage?
-## Git
-
-|Name|Topic|Objective & Instructions|Solution|Comments|
-|--------|--------|------|----|----|
-| My first Commit | Commit | [Exercise](exercises/git/commit_01.md) | [Solution](exercises/git/solutions/commit_01_solution.md) | |
-| Time to Branch | Branch | [Exercise](exercises/git/branch_01.md) | [Solution](exercises/git/solutions/branch_01_solution.md) | |
-| Squashing Commits | Commit | [Exercise](exercises/git/squashing_commits.md) | [Solution](exercises/git/solutions/squashing_commits.md) | |
-
-
-How do you know if a certain directory is a git repository?
-
-You can check if there is a ".git" directory.
-
-
-
-Explain the following: git directory
, working directory
and staging area
-
-This answer taken from [git-scm.com](https://git-scm.com/book/en/v1/Getting-Started-Git-Basics#_the_three_states)
-
-"The Git directory is where Git stores the meta data and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.
-
-The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.
-
-The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It’s sometimes referred to as the index, but it’s becoming standard to refer to it as the staging area."
-
-
-
-What is the difference between git pull
and git fetch
?
-
-In short, git pull = git fetch + git merge
-
-When you run git pull, it gets all the changes from the remote or central
-repository and merges them into your corresponding branch in your local repository.
-
-git fetch gets all the changes from the remote repository and stores them in
-a separate branch (a remote-tracking branch) in your local repository, without merging them
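-
-Roughly, assuming a remote named origin and a branch named main:
-
-```
-# these two commands...
-git fetch origin
-git merge origin/main
-# ...are roughly what this single command does:
-git pull origin main
-```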
-
-
-
-How to check if a file is tracked and if not, then track it?
-
-There are different ways to check whether a file is tracked or not:
-
- - `git ls-files --error-unmatch [file]` -> exit code of 0 means it's tracked
- - `git blame `
- ...
-
-
-
-How can you see which changes you have made before committing them?
-
-`git diff`
-
-
-
-What does `git status` do?
-
-
-
-You have two branches - main and devel. How do you make sure devel is in sync with main?
-
-```
-git checkout main
-git pull
-git checkout devel
-git merge main
-```
-
-
-#### Git - Merge
-
-
-You have two branches - main and devel. How do you put devel into main?
-
-git checkout main
-git merge devel
-git push origin main
-
-
-
-How to resolve git merge conflicts?
-
-
-First, you open the files which are in conflict and identify what the conflicts are.
-Next, based on what is accepted in your company or team, you either discuss the conflicts with your
-colleagues or resolve them by yourself.
-After resolving the conflicts, you add the files with `git add [file_name]`.
-Finally, you continue the operation with `git merge --continue` (or `git rebase --continue` if the conflicts came up during a rebase).
-
-
-
-
-What merge strategies are you familiar with?
-
-Mentioning two or three should be enough and it's probably good to mention that 'recursive' is the default one.
-
-recursive
-resolve
-ours
-theirs
-
-This page explains it the best: https://git-scm.com/docs/merge-strategies
-
-
-
-Explain Git octopus merge
-
-Probably good to mention that it's:
-
- * It's good for cases of merging more than one branch (and also the default of such use cases)
- * It's primarily meant for bundling topic branches together
-
-This is a great article about Octopus merge: http://www.freblogg.com/2016/12/git-octopus-merge.html
-
-
-
-What is the difference between `git reset` and `git revert`?
-
-
-
-`git revert` creates a new commit which undoes the changes of a given commit.
-
-`git reset` depends on the usage, can modify the index or change the commit which the branch head
-is currently pointing at.
-
-
-
-#### Git - Rebase
-
-
-You would like to move the fourth commit to the top. How would you achieve that?
-
-Using the `git rebase` command
-
-
-
-In what situations are you using `git rebase`?
-
-
-
-How do you revert a specific file to previous commit?
-
-```
-git checkout HEAD~1 -- /path/of/the/file
-```
-
-
-
-How to squash last two commits?
-
-
-
-What is the `.git` directory? What can you find there?
-
-The `.git` folder contains all the information that is necessary for your project in version control and all the information about commits, remote repository address, etc. All of them are present in this folder. It also contains a log that stores your commit history so that you can roll back to history.
-
-
-This info was copied from [https://stackoverflow.com/questions/29217859/what-is-the-git-folder](https://stackoverflow.com/questions/29217859/what-is-the-git-folder)
-
-
-
-What are some Git anti-patterns? Things that you shouldn't do
-
 * Waiting too long between commits
 * Removing the .git directory :)
-
-
-
-How do you remove a remote branch?
-
-You delete a remote branch with this syntax:
-
-git push origin :[branch_name]
-
-
-
-Are you familiar with gitattributes? When would you use it?
-
-gitattributes allow you to define attributes per pathname or path pattern.
-
-You can use it, for example, to control end-of-line characters in files. Windows and Unix-based systems use different characters for new lines (\r\n and \n respectively). Using gitattributes we can align it for both Windows and Unix with `* text=auto` in .gitattributes for anyone working with git. This way, if you use the Git project on Windows you'll get \r\n, and if you are using Unix or Linux, you'll get \n.
-
-
-
-How do you discard local file changes? (before commit)
-
-`git checkout -- [file_name]`
-
-
-
-How do you discard local commits?
-
-`git reset HEAD~1` for removing the last commit.
-If you would like to also discard the changes, run `git reset --hard HEAD~1`
-
-
-
-True or False? To remove a file from git but not from the filesystem, one should use git rm
-
-False. Plain `git rm` removes the file from both the index and the filesystem. If you would like to keep the file on your filesystem, use `git rm --cached [file_name]`
-
-
## Go
diff --git a/exercises/ansible/README.md b/exercises/ansible/README.md
new file mode 100644
index 0000000..a55f756
--- /dev/null
+++ b/exercises/ansible/README.md
@@ -0,0 +1,526 @@
+## Ansible
+
+### Ansible Exercises
+
+|Name|Topic|Objective & Instructions|Solution|Comments|
+|--------|--------|------|----|----|
+| My First Task | Tasks | [Exercise](my_first_task.md) | [Solution](solutions/my_first_task.md)
+| Upgrade and Update Task | Tasks | [Exercise](update_upgrade_task.md) | [Solution](solutions/update_upgrade_task.md)
+| My First Playbook | Playbooks | [Exercise](my_first_playbook.md) | [Solution](solutions/my_first_playbook.md)
+
+
+### Ansible Self Assessment
+
+
+Describe each of the following components in Ansible, including the relationship between them:
+
+ * Task
+ * Module
+ * Play
+ * Playbook
+ * Role
+
+
+Task – a call to a specific Ansible module
+Module – the actual unit of code executed by Ansible on your own host or a remote host. Modules are indexed by category (database, file, network, …) and also referred to as task plugins.
+
+Play – One or more tasks executed on a given host(s)
+
+Playbook – One or more plays. Each play can be executed on the same or different hosts
+
+Role – Ansible roles allow you to group resources based on certain functionality/service such that they can be easily reused. In a role, you have directories for variables, defaults, files, templates, handlers, tasks, and metadata. You can then use the role by simply specifying it in your playbook. A minimal example tying all of these together is sketched below.
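+
+A minimal sketch - a playbook with one play that runs one task, which calls the built-in `ping` module:
+
+```
+---
+- name: My first play
+  hosts: all
+  tasks:
+    - name: Check connectivity
+      ping:
+```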
+
+
+
+How is Ansible different from other automation tools? (e.g. Chef, Puppet, etc.)
+
+Ansible is:
+
+* Agentless
+* Minimal run requirements (Python & SSH) and simple to use
+* Default mode is "push" (it supports also pull)
+* Focus on simplicity and ease of use
+
+
+
+
+True or False? Ansible follows the mutable infrastructure paradigm
+
+True. With the immutable infrastructure approach, you replace infrastructure instead of modifying it.
+Ansible rather follows the mutable infrastructure paradigm, where it allows you to change the configuration of different components, but this approach is not perfect and has its own disadvantages, like "configuration drift" where different components may reach a different state for different reasons.
+
+
+
+True or False? Ansible uses declarative style to describe the expected end state
+
+False. It uses a procedural style.
+
+
+
+What kind of automation you wouldn't do with Ansible and why?
+
+While it's possible to provision resources with Ansible, some prefer to use tools that follow the immutable infrastructure paradigm.
+Ansible doesn't save state by default. So a task that creates 5 instances, for example, will create 5 additional instances when executed again (unless
+an additional check is implemented or explicit names are provided), while other tools might check if 5 instances exist. If only 4 exist (by checking the state file for example), one additional instance will be created to reach the end goal of 5 instances.
+
+
+
+How do you list all modules and how can you see details on a specific module?
+
+1. Ansible online docs
+2. `ansible-doc -l` for list of modules and `ansible-doc [module_name]` for detailed information on a specific module
+
+
+#### Ansible - Inventory
+
+
+What is an inventory file and how do you define one?
+
+An inventory file defines hosts and/or groups of hosts on which Ansible tasks are executed.
+
+An example of inventory file:
+
+```
+192.168.1.2
+192.168.1.3
+192.168.1.4
+
+[web_servers]
+190.40.2.20
+190.40.2.21
+190.40.2.22
+```
+
+
+
+What is a dynamic inventory file? When would you use one?
+
+A dynamic inventory file tracks hosts from one or more sources like cloud providers and CMDB systems.
+
+You should use one when using external sources and especially when the hosts in your environment are being automatically
+spun up and shut down, without you tracking every change in these sources.
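+
+A minimal sketch of a dynamic inventory configuration using the `aws_ec2` inventory plugin (assuming the amazon.aws collection and its requirements are installed; the filename and region are illustrative):
+
+```
+# inventory_aws_ec2.yml
+plugin: aws_ec2
+regions:
+  - us-east-1
+```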
+
+
+#### Ansible - Variables
+
+
+Modify the following task to use a variable instead of the value "zlib" and have "zlib" as the default in case the variable is not defined
+
+```
+- name: Install a package
+ package:
+ name: "zlib"
+ state: present
+```
+
+
+```
+- name: Install a package
+ package:
+ name: "{{ package_name|default('zlib') }}"
+ state: present
+```
+
+
+
+How to make the variable "use_var" optional?
+
+```
+- name: Install a package
+ package:
+ name: "zlib"
+ state: present
+ use: "{{ use_var }}"
+```
+
+
+
+With "default(omit)"
+```
+- name: Install a package
+ package:
+ name: "zlib"
+ state: present
+ use: "{{ use_var|default(omit) }}"
+```
+
+
+
+What would be the result of the following play?
+
+```
+---
+- name: Print information about my host
+ hosts: localhost
+ gather_facts: 'no'
+ tasks:
+ - name: Print hostname
+ debug:
+ msg: "It's me, {{ ansible_hostname }}"
+```
+
+When given a written code, always inspect it thoroughly. If your answer is “this will fail” then you are right. We are using a fact (ansible_hostname), which is a gathered piece of information from the host we are running on. But in this case, we disabled facts gathering (gather_facts: no) so the variable would be undefined which will result in failure.
+
+
+
+When will the value '2017' be used in this case: `{{ lookup('env', 'BEST_YEAR') | default('2017', true) }}`?
+
+When the environment variable 'BEST_YEAR' is empty or false.
+
+
+
+If the value of certain variable is 1, you would like to use the value "one", otherwise, use "two". How would you do it?
+
+`{{ (certain_variable == 1) | ternary("one", "two") }}`
+
+
+
+The value of a certain variable you use is the string "True". You would like the value to be a boolean. How would you cast it?
+
+`{{ some_string_var | bool }}`
+
+
+
+You want to run Ansible playbook only on specific minor version of your OS, how would you achieve that?
+
+
+
+What the "become" directive used for in Ansible?
+
+
+
+What are facts? How to see all the facts of a certain host?
+
+
+
+What would be the result of running the following task? How to fix it?
+
+```
+- hosts: localhost
+ tasks:
+ - name: Install zlib
+ package:
+ name: zlib
+ state: present
+```
+
+
+
+
+Which Ansible best practices are you familiar with?. Name at least three
+
+
+
+Explain the directory layout of an Ansible role
+
+
+
+What 'blocks' are used for in Ansible?
+
+
+
+How do you handle errors in Ansible?
+
+
+
+You would like to run a certain command if a task fails. How would you achieve that?
+
+
+
+Write a playbook to install ‘zlib’ and ‘vim’ on all hosts if the file ‘/tmp/mario’ exists on the system.
+
+```
+---
+- hosts: all
+ vars:
+ mario_file: /tmp/mario
+ package_list:
+ - 'zlib'
+ - 'vim'
+ tasks:
+ - name: Check for mario file
+ stat:
+ path: "{{ mario_file }}"
+ register: mario_f
+
+ - name: Install zlib and vim if mario file exists
+ become: "yes"
+ package:
+ name: "{{ item }}"
+ state: present
+ with_items: "{{ package_list }}"
+ when: mario_f.stat.exists
+```
+
+
+
+Write a single task that verifies all the files in files_list variable exist on the host
+
+```
+- name: Ensure all files exist
+  stat:
+    path: "{{ item }}"
+  register: result
+  # fail for any file in the list that doesn't exist on the host
+  failed_when: not result.stat.exists
+  loop: "{{ files_list }}"
+```
+
+
+
+Write a playbook to deploy the file ‘/tmp/system_info’ on all hosts except for controllers group, with the following content
+
+ ```
+ I'm [hostname] and my operating system is [os]
+ ```
+
+ Replace [hostname] and [os] with the actual data for the specific host you are running on
+
+The playbook to deploy the system_info file
+
+```
+---
+- name: Deploy /tmp/system_info file
+ hosts: all:!controllers
+ tasks:
+ - name: Deploy /tmp/system_info
+ template:
+ src: system_info.j2
+ dest: /tmp/system_info
+```
+
+The content of the system_info.j2 template
+
+```
+# {{ ansible_managed }}
+I'm {{ ansible_hostname }} and my operating system is {{ ansible_distribution }}
+```
+
+
+
+The variable 'whoami' defined in the following places:
+
+ * role defaults -> whoami: mario
+ * extra vars (variables you pass to Ansible CLI with -e) -> whoami: toad
+ * host facts -> whoami: luigi
+ * inventory variables (doesn't matter which type) -> whoami: bowser
+
+According to variable precedence, which one will be used?
+
+The right answer is ‘toad’.
+
+Variable precedence is about how variables override each other when they are set in different locations. If you haven't experienced it so far, I'm sure at some point you will, which makes it a useful topic to be aware of.
+
+In the context of our question, the order will be extra vars (always override any other variable) -> host facts -> inventory variables -> role defaults (the weakest).
+
+Here is the order of precedence from least to greatest (the last listed variables winning prioritization):
+
+1. command line values (eg “-u user”)
+2. role defaults [[1\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id15)
+3. inventory file or script group vars [[2\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id16)
+4. inventory group_vars/all [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
+5. playbook group_vars/all [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
+6. inventory group_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
+7. playbook group_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
+8. inventory file or script host vars [[2\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id16)
+9. inventory host_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
+10. playbook host_vars/* [[3\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id17)
+11. host facts / cached set_facts [[4\]](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#id18)
+12. play vars
+13. play vars_prompt
+14. play vars_files
+15. role vars (defined in role/vars/main.yml)
+16. block vars (only for tasks in block)
+17. task vars (only for the task)
+18. include_vars
+19. set_facts / registered vars
+20. role (and include_role) params
+21. include params
+22. extra vars (always win precedence)
+
+A full list can be found at [PlayBook Variables](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#ansible-variable-precedence) . Also, note there is a significant difference between Ansible 1.x and 2.x.
+
+
+
+For each of the following statements determine if it's true or false:
+
+ * A module is a collection of tasks
+ * It’s better to use shell or command instead of a specific module
+ * Host facts override play variables
+ * A role might include the following: vars, meta, and handlers
+ * Dynamic inventory is generated by extracting information from external sources
+ * It’s a best practice to use indention of 2 spaces instead of 4
+ * ‘notify’ used to trigger handlers
+ * This “hosts: all:!controllers” means ‘run only on controllers group hosts
+
+
+
+Explain the difference between forks, serial and throttle.
+
+`serial` is like running the playbook for each host in turn, waiting for completion of the complete playbook before moving on to the next host. `forks` = 1 means run the first task in a play on one host before running the same task on the next host, so the first task will be run for each host before the next task is touched. The default forks value is 5 in Ansible.
+
+```
+[defaults]
+forks = 30
+```
+
+```
+- hosts: webservers
+ serial: 1
+ tasks:
+ - name: ...
+```
+
+Ansible also supports `throttle`. This keyword limits the number of workers up to the maximum set via the forks setting or serial. This can be useful in restricting tasks that may be CPU-intensive or interact with a rate-limiting API.
+
+```
+tasks:
+- command: /path/to/cpu_intensive_command
+ throttle: 1
+```
+
+
+
+
+What is ansible-pull? How is it different from how ansible-playbook works?
+
+
+
+What is Ansible Vault?
+
+
+
+Demonstrate each of the following with Ansible:
+
+ * Conditionals
+ * Loops
+
+
+
+
+What are filters? Do you have experience with writing filters?
+
+
+
+Write a filter to capitalize a string
+
+```
+# filter plugins are classes named FilterModule with a filters() method
+class FilterModule(object):
+    def filters(self):
+        return {'cap': self.cap}
+
+    def cap(self, string):
+        return string.capitalize()
+```
+
+
+
+You would like to run a task only if previous task changed anything. How would you achieve that?
+
+
+
+What are callback plugins? What can you achieve by using callback plugins?
+
+
+
+What is Ansible Collections?
+
+
+
+What is the difference between `include_task` and `import_task`?
+
+
+
+File '/tmp/exercise' includes the following content
+
+```
+Goku = 9001
+Vegeta = 5200
+Trunks = 6000
+Gotenks = 32
+```
+
+With one task, switch the content to:
+
+```
+Goku = 9001
+Vegeta = 250
+Trunks = 40
+Gotenks = 32
+```
+
+
+```
+- name: Change saiyans levels
+ lineinfile:
+ dest: /tmp/exercise
+ regexp: "{{ item.regexp }}"
+ line: "{{ item.line }}"
+ with_items:
+ - { regexp: '^Vegeta', line: 'Vegeta = 250' }
+ - { regexp: '^Trunks', line: 'Trunks = 40' }
+ ...
+```
+
+
+
+#### Ansible - Execution and Strategy
+
+
+True or False? By default, Ansible will execute all the tasks in play on a single host before proceeding to the next host
+
+False. Ansible will execute a single task on all hosts before moving to the next task in a play. As of today, it uses 5 forks by default.
+This behaviour is described as a "strategy" in Ansible and it's configurable.
+
+
+
+What is a "strategy" in Ansible? What is the default strategy?
+
+A strategy in Ansible describes how Ansible will execute the different tasks on the hosts. By default, Ansible uses the "linear" strategy, which defines that each task will run on all hosts before proceeding to the next task.
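+
+The strategy can be set per play, for example:
+
+```
+- hosts: all
+  strategy: free
+  tasks:
+    - name: Some task
+      ping:
+```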
+
+
+
+What strategies are you familiar with in Ansible?
+
+ - Linear: the default strategy in Ansible. Run each task on all hosts before proceeding.
+ - Free: For each host, run all the tasks until the end of the play as soon as possible
+ - Debug: Run tasks in an interactive way
+
+
+
+What is the `serial` keyword used for?
+
+It's used to specify the number (or percentage) of hosts to run the full play on, before moving to the next batch of hosts in the group.
+
+For example:
+```
+- name: Some play
+ hosts: databases
+ serial: 4
+```
+
+If your group has 8 hosts, it will run the whole play on 4 hosts and then the same play on the other 4 hosts.
+
+
+#### Ansible Testing
+
+
+How do you test your Ansible based projects?
+
+
+
+What is Molecule? How does it work?
+
+
+
+You run Ansible tests and you get "idempotence test failed". What does it mean? Why is idempotence important?
+
+
+#### Ansible - Debugging
+
+
+How to find out the data type of a certain variable in one of the playbooks?
+
+"{{ some_var | type_debug }}"
+
+
+#### Ansible - Collections
+
+
+What are collections in Ansible?
+
+
diff --git a/exercises/ansible_minikube_docker.md b/exercises/ansible_minikube_docker.md
deleted file mode 100644
index 1ea4bfc..0000000
--- a/exercises/ansible_minikube_docker.md
+++ /dev/null
@@ -1,6 +0,0 @@
-## Ansible, Minikube and Docker
-
-* Write a simple program in any language you want that outputs "I'm on %HOSTNAME%" (HOSTNAME should be the actual host name on which the app is running)
-* Write a Dockerfile which will run your app
-* Create the YAML files required for deploying the pods
-* Write and run an Ansible playbook which will install Docker, Minikube and kubectl and then create a deployment in minikube with your app running.
diff --git a/exercises/aws/README.md b/exercises/aws/README.md
new file mode 100644
index 0000000..6681f6e
--- /dev/null
+++ b/exercises/aws/README.md
@@ -0,0 +1,1446 @@
+## AWS
+
+### AWS Exercises
+
+#### AWS - IAM
+
+|Name|Topic|Objective & Instructions|Solution|Comments|
+|--------|--------|------|----|----|
+| Create a User | IAM | [Exercise](create_user.md) | [Solution](solutions/create_user.md) | |
+| Password Policy | IAM | [Exercise](password_policy_and_mfa.md) | [Solution](solutions/password_policy_and_mfa.md) | |
+| Create a role | IAM | [Exercise](create_role.md) | [Solution](solutions/create_role.md) | |
+| Credential Report | IAM | [Exercise](credential_report.md) | [Solution](solutions/credential_report.md) | |
+| Access Advisor | IAM | [Exercise](access_advisor.md) | [Solution](solutions/access_advisor.md) | |
+
+#### AWS - Lambda
+
+|Name|Topic|Objective & Instructions|Solution|Comments|
+|--------|--------|------|----|----|
+| Hello Function | Lambda | [Exercise](hello_function.md) | [Solution](solutions/hello_function.md) | |
+| URL Function | Lambda | [Exercise](url_function.md) | [Solution](solutions/url_function.md) | |
+
+### AWS Self Assessment
+
+#### AWS - Global Infrastructure
+
+
+Explain the following
+
+ * Availability zone
+ * Region
+ * Edge location
+
+AWS regions are clusters of data centers hosted across different geographical locations worldwide.
+
+Within each region, there are multiple isolated locations known as Availability Zones. Each availability zone is one or more data-centers with redundant network and connectivity and power supply. Multiple availability zones ensure high availability in case one of them goes down.
+
+Edge locations are basically content delivery network endpoints which cache data and ensure lower latency and faster delivery to users in any location. They are located in major cities around the world.
+
+
+
+True or False? Each AWS region is designed to be completely isolated from the other AWS regions
+
+True.
+
+
+
+True or False? Each region has a minimum number of 1 availability zones and the maximum is 4
+
+False. The minimum is 2 while the maximum is 6.
+
+
+
+What considerations to take when choosing an AWS region for running a new application?
+
+* Service availability: not all services (and all their features) are available in every region
+* Reduced latency: deploy the application in a region that is close to your customers
+* Compliance: some countries have stricter rules and requirements, such as making sure the data stays within the borders of the country or the region. In that case, only specific regions can be used for running the application
+* Pricing: pricing might not be consistent across regions, so the price of the same service in different regions might be different
+
+
+#### AWS - IAM
+
+
+What is IAM? What are some of its features?
+
+In short, it's used for managing users, groups, access policies & roles
+Full explanation can be found [here](https://aws.amazon.com/iam)
+
+
+
+True or False? IAM configuration is defined globally and not per region
+
+True
+
+
+
+True or False? When creating an AWS account, root account is created by default. This is the recommended account to use and share in your organization
+
+False. Instead of using the root account, you should be creating users and use them.
+
+
+
+True or False? Groups in AWS IAM, can contain only users and not other groups
+
+True
+
+
+
+True or False? Users in AWS IAM, can belong only to a single group
+
+False. Users can belong to multiple groups.
+
+
+
+What are some best practices regarding IAM in AWS?
+
+* Delete root account access keys and don't use root account regularly
+* Create IAM user for any physical user. Don't share users.
+* Apply "least privilege principle": give users only the permissions they need, nothing more than that.
+* Set up MFA and consider enforcing using it
+* Make use of groups to assign permissions ( user -> group -> permissions )
+
+
+
+What permissions does a new user have?
+
+Only a login access.
+
+
+
+True or False? If a user in AWS is using a password for authenticating, he doesn't need to enable MFA
+
+False(!). MFA is a great additional security layer to use for authentication.
+
+
+
+What ways are there to access AWS?
+
+ * AWS Management Console
+ * AWS CLI
+ * AWS SDK
+
+
+
+What are Roles?
+
+[AWS docs](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html): "An IAM role is an IAM identity that you can create in your account that has specific permissions...it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS."
+For example, you can make use of a role which allows EC2 service to access s3 buckets (read and write).
+
+
+
+What are Policies?
+
+Policies documents used to give permissions as to what a user, group or role are able to do. Their format is JSON.
+
+
+
+A user is unable to access an s3 bucket. What might be the problem?
+
+There can be several reasons for that. One of them is lack of a policy. To solve that, the admin has to attach a policy to the user that allows them to access the s3 bucket.
+
+
+
+What should you use to:
+
+ - Grant access between two services/resources?
+ - Grant user access to resources/services?
+
+ * Role
+ * Policy
+
+
+
+What statements do AWS IAM policies support?
+
+* Sid: identifier of the statement (optional)
+* Effect: allow or deny access
+* Action: list of actions (to deny or allow)
+* Resource: a list of resources to which the actions are applied
+* Principal: role or account or user to which to apply the policy
+* Condition: conditions to determine when the policy is applied (optional)
+
+
+
+Explain the following policy:
+
+```
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect:": "Allow",
+ "Action": "*",
+ "Resources": "*"
+ }
+ ]
+}
+```
+
+
+This policy permits to perform any action on any resource. It happens to be the "AdministratorAccess" policy.
+
+
+
+What security tools AWS IAM provides?
+
+* IAM Credentials Report: lists all the account users and the status of their credentials
+* IAM Access Advisor: shows the service permissions granted to a user and when those services were last accessed
+
+
+
+Which tool would you use to optimize user permissions by identifying which services a user doesn't access regularly (or at all)?
+
+IAM Access Advisor
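+
+A hedged sketch of pulling the same "last accessed" data programmatically with boto3 (the user ARN is made up):
+
+```
+import time
+
+import boto3
+
+iam = boto3.client("iam")
+
+# Ask IAM to compile "last accessed" data for a principal (what Access Advisor shows)
+job = iam.generate_service_last_accessed_details(
+    Arn="arn:aws:iam::123456789012:user/alice"  # made-up user ARN
+)
+
+# The job is asynchronous - poll until the report is ready
+while True:
+    report = iam.get_service_last_accessed_details(JobId=job["JobId"])
+    if report["JobStatus"] != "IN_PROGRESS":
+        break
+    time.sleep(1)
+
+# Services with no LastAuthenticated timestamp were never accessed
+for svc in report["ServicesLastAccessed"]:
+    print(svc["ServiceName"], svc.get("LastAuthenticated", "never accessed"))
+```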
+
+
+#### AWS - Compute
+
+
+What is EC2?
+
+"a web service that provides secure, resizable compute capacity in the cloud".
+Read more [here](https://aws.amazon.com/ec2)
+
+
+
+True or False? EC2 is a regional service
+
+True. As opposed to IAM for example, which is a global service, EC2 is a regional service.
+
+
+
+What is AMI?
+
+AMI stands for Amazon Machine Image. AWS docs: "An Amazon Machine Image (AMI) provides the information required to launch an instance".
+Read more [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
+
+
+
+What are the different sources for AMIs?
+
+* Personal AMIs - AMIs you create
+* AWS Marketplace for AMIs - paid AMIs, usually bundled with licensed software
+* Community AMIs - Free
+
+
+
+What is instance type?
+
+"the instance type that you specify determines the hardware of the host computer used for your instance"
+Read more about instance types [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html)
+
+
+
+True or False? The following are instance types available for a user in AWS:
+
+ * Compute optimized
+ * Network optimized
+ * Web optimized
+
+False. From the above list only compute optimized is available.
+
+
+
+What is EBS?
+
+"provides block level storage volumes for use with EC2 instances. EBS volumes behave like raw, unformatted block devices."
+More on EBS [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html)
+
+
+
+What EC2 pricing models are there?
+
+* On Demand - pay a fixed rate by the hour/second with no commitment. You can provision and terminate it at any given time.
+* Reserved - you get a capacity reservation, basically purchasing an instance for a fixed period of time. The longer the period, the bigger the discount.
+* Spot - enables you to use spare compute capacity for a steep discount (you pay the spot price), at the risk of the instance being interrupted.
+* Dedicated Hosts - a physical EC2 server dedicated for your use.
+
+
+
+What are Security Groups?
+
+"A security group acts as a virtual firewall that controls the traffic for one or more instances"
+More on this subject [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html)
+
+
+
+How to migrate an instance to another availability zone?
+
+
+
+What can you attach to an EC2 instance in order to store data?
+
+EBS
+
+
+
+What EC2 RI types are there?
+
+* Standard RI - most significant discount + suited for steady-state usage
+* Convertible RI - discount + the ability to change the RI's attributes + suited for steady-state usage
+* Scheduled RI - launch within the time windows you reserve
+
+Learn more about EC2 RI [here](https://aws.amazon.com/ec2/pricing/reserved-instances)
+
+
+
+You would like to invoke a function every time you enter a URL in the browser. Which service would you use for that?
+
+AWS Lambda
+
+
+#### AWS - Lambda
+
+
+Explain what is AWS Lambda
+
+AWS definition: "AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume."
+
+Read more on it [here](https://aws.amazon.com/lambda)
+
+
+
+True or False? In AWS Lambda, you are charged as long as a function exists, regardless of whether it's running or not
+
+False. Charges are made only when the code is executed, based on the number of invocations and the execution time.
+
+
+
+Which of the following set of languages Lambda supports?
+
+- R, Swift, Rust, Kotlin
+- Python, Ruby, Go
+- Python, Ruby, PHP
+
+
+- Python, Ruby, Go
+
+
+
+True or False? Basic lambda permissions allow you only to upload logs to Amazon CloudWatch Logs
+
+True
+
+
+#### AWS Containers
+
+
+What is Amazon ECS?
+
+Amazon definition: "Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. Customers such as Duolingo, Samsung, GE, and Cook Pad use ECS to run their most sensitive and mission critical applications because of its security, reliability, and scalability."
+
+Learn more [here](https://aws.amazon.com/ecs)
+
+
+
+What is Amazon ECR?
+
+Amazon definition: "Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images."
+
+Learn more [here](https://aws.amazon.com/ecr)
+
+
+
+What is AWS Fargate?
+
+Amazon definition: "AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS)."
+
+Learn more [here](https://aws.amazon.com/fargate)
+
+
+#### AWS Storage
+
+
+Explain what is AWS S3?
+
+S3 stands for Simple Storage Service.
+S3 is an object storage service which is fast, scalable and durable. S3 enables customers to upload, download or store any file or object that is up to 5 TB in size.
+
+More on S3 [here](https://aws.amazon.com/s3)
+
+
+
+What is a bucket?
+
+An S3 bucket is a resource which is similar to folders in a file system and allows storing objects, which consist of data.
+
+
+
+True or False? A bucket name must be globally unique
+
+True
+
+
+
+Explain folders and objects in regards to buckets
+
+* Folder - any sub folder in an s3 bucket
+* Object - The files which are stored in a bucket
+
+
+
+Explain the following:
+
+ - Object Lifecycles
+ - Object Sharing
+ - Object Versioning
+
+ * Object Lifecycles - Transfer objects between storage classes based on defined rules of time periods
+ * Object Sharing - Share objects via a URL link
+ * Object Versioning - Manage multiple versions of an object
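+
+A hedged boto3 sketch of versioning and sharing (the bucket and object names are made up):
+
+```
+import boto3
+
+s3 = boto3.client("s3")
+
+# Object Versioning: keep every version of every object in the bucket
+s3.put_bucket_versioning(
+    Bucket="example-bucket",  # made-up bucket name
+    VersioningConfiguration={"Status": "Enabled"},
+)
+
+# Object Sharing: generate a time-limited URL for a single object
+url = s3.generate_presigned_url(
+    "get_object",
+    Params={"Bucket": "example-bucket", "Key": "report.csv"},
+    ExpiresIn=3600,  # the link expires after one hour
+)
+print(url)
+```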
+
+
+
+Explain Object Durability and Object Availability
+
+Object Durability: The percent over a one-year time period that a file will not be lost
+Object Availability: The percent over a one-year time period that a file will be accessible
+
+
+
+What is a storage class? What storage classes are there?
+
+Each object has a storage class assigned to it, affecting its availability and durability. This also affects costs.
+Storage classes offered today:
+ * Standard:
+ * Used for general, all-purpose storage (mostly storage that needs to be accessed frequently)
+ * The most expensive storage class
+ * 11x9% durability
+ * 4x9% (99.99%) availability
+ * Default storage class
+
+ * Standard-IA (Infrequent Access)
+ * Long lived, infrequently accessed data but must be available the moment it's being accessed
+ * 11x9% durability
+ * 99.90% availability
+
+ * One Zone-IA (Infrequent Access):
+ * Long-lived, infrequently accessed, non-critical data
+ * Less expensive than Standard and Standard-IA storage classes
+ * 11x9% durability (but data is stored in a single AZ, so it can be lost if that AZ is destroyed)
+ * 99.50% availability
+
+ * Intelligent-Tiering:
+ * Long-lived data with changing or unknown access patterns. Basically, in this class the data automatically moves to the class most suitable for it based on usage patterns
+ * Price depends on the used class
+ * 11x9% durability
+ * 99.90% availability
+
+ * Glacier: Archive data with retrieval time ranging from minutes to hours
+ * Glacier Deep Archive: Archive data that rarely, if ever, needs to be accessed with retrieval times in hours
+ * Both Glacier and Glacier Deep Archive are:
+ * The cheapest storage classes
+ * have 11x9% durability
+
+More on storage classes [here](https://aws.amazon.com/s3/storage-classes)
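+
+A hedged boto3 sketch of an Object Lifecycle rule that transitions objects between the classes above (the bucket name and the rule itself are made up):
+
+```
+import boto3
+
+s3 = boto3.client("s3")
+
+# Move objects to Standard-IA after 30 days and to Glacier after 90 days
+s3.put_bucket_lifecycle_configuration(
+    Bucket="example-bucket",  # made-up bucket name
+    LifecycleConfiguration={
+        "Rules": [{
+            "ID": "archive-old-objects",
+            "Status": "Enabled",
+            "Filter": {"Prefix": ""},  # apply to all objects in the bucket
+            "Transitions": [
+                {"Days": 30, "StorageClass": "STANDARD_IA"},
+                {"Days": 90, "StorageClass": "GLACIER"},
+            ],
+        }]
+    },
+)
+```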
+
+
+
+
+A customer would like to move data which is rarely accessed from the Standard storage class to the cheapest class there is. Which storage class should be used?
+
+ * One Zone-IA
+ * Glacier Deep Archive
+ * Intelligent-Tiering
+
+Glacier Deep Archive
+
+
+
+What Glacier retrieval options are available for the user?
+
+Expedited, Standard and Bulk
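+
+A hedged boto3 sketch of requesting one of these retrievals (the bucket and key are made up):
+
+```
+import boto3
+
+s3 = boto3.client("s3")
+
+# Temporarily restore an archived object; Tier can be "Expedited",
+# "Standard" or "Bulk" (fastest/most expensive to slowest/cheapest)
+s3.restore_object(
+    Bucket="example-bucket",  # made-up names
+    Key="logs/2019/archive.tar",
+    RestoreRequest={
+        "Days": 2,  # keep the restored copy available for two days
+        "GlacierJobParameters": {"Tier": "Expedited"},
+    },
+)
+```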
+
+
+
+True or False? Each AWS account can store up to 500 PetaByte of data. Any additional storage will cost double
+
+False. Unlimited capacity.
+
+
+
+Explain what is Storage Gateway
+
+"AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage".
+More on Storage Gateway [here](https://aws.amazon.com/storagegateway)
+
+
+
+Explain the following Storage Gateway deployment types
+
+ * File Gateway
+ * Volume Gateway
+ * Tape Gateway
+
+Explained in detail [here](https://aws.amazon.com/storagegateway/faqs)
+
+
+
+What is the difference between stored volumes and cached volumes?
+
+Stored Volumes - data is located at the customer's data center and periodically backed up to AWS
+Cached Volumes - data is stored in the AWS cloud and cached at the customer's data center for quick access
+
+
+
+What is "Amazon S3 Transfer Acceleration"?
+
+AWS definition: "Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket"
+
+Learn more [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html)
+
+
+
+Explain data consistency
+
+S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in S3 buckets in all AWS Regions. S3 always returns the latest version of an object.
+
+
+
+Can you host dynamic websites on S3? What about static websites?
+
+No, S3 supports only static website hosting. On a static website, individual webpages include static content. They might also contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting.
+
+
+
+What security measures have you taken in the context of S3?
+
+ * Enable versioning.
+ * Don't make buckets public.
+ * Enable encryption if it's disabled.
+
+
+
+What storage options are there for EC2 Instances?
+
+
+
+What is Amazon EFS?
+
+Amazon definition: "Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources."
+
+Learn more [here](https://aws.amazon.com/efs)
+
+
+
+What is AWS Snowmobile?
+
+"AWS Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS."
+
+Learn more [here](https://aws.amazon.com/snowmobile)
+
+
+#### AWS Disaster Recovery
+
+
+In regards to disaster recovery, what is RTO and RPO?
+
+RTO - The maximum acceptable length of time that your application can be offline.
+
+RPO - The maximum acceptable length of time during which data might be lost from your application due to an incident.
+
+
+
+What types of disaster recovery techniques does AWS support?
+
+* The Cold Method - periodically taking backups and sending them off-site
+* Pilot Light - Data is mirrored to an environment which is always running
+* Warm Standby - Running scaled down version of production environment
+* Multi-site - Duplicated environment that is always running
+
+
+
+Which disaster recovery option has the highest downtime and which has the lowest?
+
+Lowest - Multi-site
+Highest - The cold method
+
+
+#### AWS CloudFront
+
+
+Explain what is CloudFront
+
+AWS definition: "Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment."
+
+More on CloudFront [here](https://aws.amazon.com/cloudfront)
+
+
+
+Explain the following
+
+ * Origin
+ * Edge location
+ * Distribution
+
+
+
+What delivery methods are available for the user with CDN?
+
+
+
+True or False? Objects are cached for the life of the TTL
+
+True
+
+
+
+What is AWS Snowball?
+
+A transport solution which was designed for transferring large amounts of data (petabyte-scale) into and out of the AWS cloud.
+
+
+##### AWS ELB
+
+
+What is ELB (Elastic Load Balancing)?
+
+AWS definition: "Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions."
+
+More on ELB [here](https://aws.amazon.com/elasticloadbalancing)
+
+
+
+What types of load balancers are supported in EC2 and what are they used for?
+
+ * Application LB - layer 7 traffic
+ * Network LB - ultra-high performances or static IP address (layer 4)
+ * Classic LB - low costs, good for test or dev environments (retired by August 15, 2022)
+ * Gateway LB - transparent network gateway that distributes traffic to virtual appliances such as firewalls, intrusion detection and prevention systems, and deep packet inspection systems (layer 3)
+
+
+#### AWS Security
+
+
+What is the shared responsibility model? What is AWS responsible for, and what is the user responsible for, based on the shared responsibility model?
+
+The shared responsibility model defines what the customer is responsible for and what AWS is responsible for: in short, AWS is responsible for the security "of" the cloud (hardware, facilities and infrastructure software), while the customer is responsible for security "in" the cloud (data, access management and resource configuration).
+
+More on the shared responsibility model [here](https://aws.amazon.com/compliance/shared-responsibility-model)
+
+
+
+True or False? Based on the shared responsibility model, Amazon is responsible for physical CPUs and security groups on instances
+
+False. AWS is responsible for the hardware in its sites, but not for security groups, which are created and managed by users.
+
+
+
+Explain "Shared Controls" in regards to the shared responsibility model
+
+AWS definition: "apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services"
+
+Learn more about it [here](https://aws.amazon.com/compliance/shared-responsibility-model)
+
+
+
+What is the AWS compliance program?
+
+
+
+How to secure instances in AWS?
+
+ * Instance IAM roles should have minimal permissions needed. You don't want an instance-level incident to become an account-level incident
+ * Use "AWS System Manager Session Manager" for SSH
+ * Use the latest OS images with your instances
+
+
+
+What is AWS Artifact?
+
+AWS definition: "AWS Artifact is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to AWS’ security and compliance reports and select online agreements."
+
+Read more about it [here](https://aws.amazon.com/artifact)
+
+
+
+What is AWS Inspector?
+
+AWS definition: "Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.""
+
+Learn more [here](https://aws.amazon.com/inspector)
+
+
+
+What is AWS GuardDuty?
+
+AWS definition: "Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your Amazon Web Services accounts, workloads, and data stored in Amazon S3"
+
+It monitors VPC Flow Logs, DNS logs, CloudTrail S3 events and CloudTrail management events.
+
+
+
+What is AWS Shield?
+
+AWS definition: "AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS."
+
+
+
+What is AWS WAF? Give an example of how it can be used and describe what resources or services you can use it with
+
+
+
+What is AWS VPN used for?
+
+
+
+What is the difference between Site-to-Site VPN and Client VPN?
+
+
+
+What is AWS CloudHSM?
+
+Amazon definition: "AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud."
+
+Learn more [here](https://aws.amazon.com/cloudhsm)
+
+
+
+True or False? AWS Inspector can perform both network and host assessments
+
+True
+
+
+
+What is AWS Key Management Service (KMS)?
+
+AWS definition: "KMS makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications."
+More on KMS [here](https://aws.amazon.com/kms)
+
+
+
+What is AWS Acceptable Use Policy?
+
+It describes prohibited uses of the web services offered by AWS.
+More on AWS Acceptable Use Policy [here](https://aws.amazon.com/aup)
+
+
+
+True or False? A user is not allowed to perform penetration testing on any of the AWS services
+
+False. On some services, like EC2, CloudFront and RDS, penetration testing is allowed.
+
+
+
+True or False? DDoS attack is an example of allowed penetration testing activity
+
+False.
+
+
+
+True or False? AWS Access Key is a type of MFA device used for AWS resources protection
+
+False. Security key is an example of an MFA device.
+
+
+
+What is Amazon Cognito?
+
+Amazon definition: "Amazon Cognito handles user authentication and authorization for your web and mobile apps."
+
+Learn more [here](https://docs.aws.amazon.com/cognito/index.html)
+
+
+
+What is AWS ACM?
+
+Amazon definition: "AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources."
+
+Learn more [here](https://aws.amazon.com/certificate-manager)
+
+
+#### AWS Databases
+
+
+What is AWS RDS?
+
+
+
+What is AWS DynamoDB?
+
+
+
+Explain "Point-in-Time Recovery" feature in DynamoDB
+
+Amazon definition: "You can create on-demand backups of your Amazon DynamoDB tables, or you can enable continuous backups using point-in-time recovery. For more information about on-demand backups, see On-Demand Backup and Restore for DynamoDB."
+
+Learn more [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html)
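+
+A minimal boto3 sketch of turning the feature on (the table name is made up):
+
+```
+import boto3
+
+dynamodb = boto3.client("dynamodb")
+
+# Enable continuous backups (point-in-time recovery) on an existing table
+dynamodb.update_continuous_backups(
+    TableName="orders",  # made-up table name
+    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
+)
+```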
+
+
+
+Explain "Global Tables" in DynamoDB
+
+Amazon definition: "A global table is a collection of one or more replica tables, all owned by a single AWS account."
+
+Learn more [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_HowItWorks.html)
+
+
+
+What is DynamoDB Accelerator?
+
+Amazon definition: "Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds..."
+
+Learn more [here](https://aws.amazon.com/dynamodb/dax)
+
+
+
+What is AWS Redshift and how is it different than RDS?
+
+Redshift is a fully managed cloud data warehouse, optimized for analytical (OLAP) queries over large amounts of data, while RDS is a managed relational database service geared toward transactional (OLTP) workloads.
+
+
+
+What do you do if you suspect AWS Redshift performs slowly?
+
+* You can confirm your suspicion by going to the AWS Redshift console and checking the running queries graph. This should tell you if there are any long-running queries.
+* If confirmed, you can query for running queries and cancel the irrelevant queries
+* Check for connection leaks (query for running connections and include their IP)
+* Check for table locks and kill irrelevant locking sessions
+
+
+
+What is AWS ElastiCache? For what cases is it used?
+
+Amazon Elasticache is a fully managed Redis or Memcached in-memory data store.
+It's great for use cases like two-tier web applications where the most frequently accessed data is stored in ElastiCache so response time is optimal.
+
+
+
+What is Amazon Aurora?
+
+A MySQL and PostgreSQL compatible relational database. It's also the default database engine proposed when creating a database with RDS.
+Great for use cases like two-tier web applications that have a MySQL or PostgreSQL database layer and need automated backups for the application.
+
+
+
+What is Amazon DocumentDB?
+
+Amazon definition: "Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data."
+
+Learn more [here](https://aws.amazon.com/documentdb)
+
+
+
+What "AWS Database Migration Service" is used for?
+
+
+
+What type of storage is used by Amazon RDS?
+
+EBS
+
+
+
+Explain Amazon RDS Read Replicas
+
+AWS definition: "Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads."
+Read more about [here](https://aws.amazon.com/rds/features/read-replicas)
+
+
+#### AWS Networking
+
+
+What is VPC?
+
+"A logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network that you define"
+Read more about it [here](https://aws.amazon.com/vpc).
+
+
+
+True or False? VPC spans multiple regions
+
+False
+
+
+
+True or False? Subnets belonging to the same VPC can be in different availability zones
+
+True. Just to clarify, a single subnet resides entirely in one AZ.
+
+
+
+What is an Internet Gateway?
+
+"component that allows communication between instances in your VPC and the internet" (AWS docs).
+Read more about it [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html)
+
+
+
+True or False? NACLs allow or deny traffic on the subnet level
+
+True
+
+
+
+What is VPC peering?
+
+[docs.aws](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html): "A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses."
+
+
+
+True or False? Multiple Internet Gateways can be attached to one VPC
+
+False. Only one internet gateway can be attached to a single VPC.
+
+
+
+What is an Elastic IP address?
+An Elastic IP address is a reserved public IP address that you can assign to any EC2 instance in a particular region, until you choose to release it.
+When you associate an Elastic IP address with an EC2 instance, it replaces the default public IP address. If an external hostname was allocated to the instance from your launch settings, it will also replace this hostname; otherwise, it will create one for the instance. The Elastic IP address remains in place through events that normally cause the address to change, such as stopping or restarting the instance.
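+
+A hedged boto3 sketch of allocating an Elastic IP and associating it with an instance (the instance ID is made up):
+
+```
+import boto3
+
+ec2 = boto3.client("ec2")
+
+# Reserve an Elastic IP address in the current region
+allocation = ec2.allocate_address(Domain="vpc")
+
+# Associate it with an instance; it replaces the instance's public IP
+ec2.associate_address(
+    InstanceId="i-0123456789abcdef0",  # made-up instance ID
+    AllocationId=allocation["AllocationId"],
+)
+```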
+
+
+
+True or False? Route tables are used to allow or deny traffic from the internet to AWS instances
+
+False. Route tables determine where network traffic is directed; allowing or denying traffic is handled by security groups and NACLs.
+
+
+
+Explain Security Groups and Network ACLs
+
+* NACL - security layer on the subnet level.
+* Security Group - security layer on the instance level.
+
+Read more about it [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html) and [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
+
+
+
+What is AWS Direct Connect?
+
+It allows you to establish a dedicated network connection between your corporate network and AWS, bypassing the public internet.
+
+
+#### AWS - Identify the service or tool
+
+
+What would you use for automating code/software deployments?
+
+AWS CodeDeploy
+
+
+
+What would you use for easily creating similar AWS environments/resources for different customers?
+
+CloudFormation
+
+
+
+Using which service, can you add user sign-up, sign-in and access control to mobile and web apps?
+
+Cognito
+
+
+
+Which service would you use for building a website or web application?
+
+Lightsail
+
+
+
+Which tool would you use for choosing between Reserved instances or On-Demand instances?
+
+Cost Explorer
+
+
+
+What would you use to check how many unassociated Elastic IP addresses you have?
+
+Trusted Advisor
+
+
+
+Which service allows you to transfer large amounts (Petabytes) of data in and out of the AWS cloud?
+
+AWS Snowball
+
+
+
+Which service would you use if you need a data warehouse?
+
+AWS RedShift
+
+
+
+Which service provides a virtual network dedicated to your AWS account?
+
+VPC
+
+
+
+What would you use for having automated backups for an application that has a MySQL database layer?
+
+Amazon Aurora
+
+
+
+What would you use to migrate on-premise database to AWS?
+
+AWS Database Migration Service (DMS)
+
+
+
+What would you use to check why certain EC2 instances were terminated?
+
+AWS CloudTrail
+
+
+
+What would you use for SQL database?
+
+AWS RDS
+
+
+
+What would you use for NoSQL database?
+
+AWS DynamoDB
+
+
+
+What would you use for adding image and video analysis to your application?
+
+AWS Rekognition
+
+
+
+Which service would you use for debugging and improving performance issues with your applications?
+
+AWS X-Ray
+
+
+
+Which service is used for sending notifications?
+
+SNS
+
+
+
+What would you use for running SQL queries interactively on S3?
+
+AWS Athena
+
+
+
+What would you use for preparing and combining data for analytics or ML?
+
+AWS Glue
+
+
+
+Which service would you use for monitoring malicious activity and unauthorized behavior in regards to AWS accounts and workloads?
+
+Amazon GuardDuty
+
+
+
+Which service would you use to centrally manage billing, control access, compliance, and security across multiple AWS accounts?
+
+AWS Organizations
+
+
+
+Which service would you use for web application protection?
+
+AWS WAF
+
+
+
+You would like to monitor some of your resources in the different services. Which service would you use for that?
+
+CloudWatch
+
+
+
+Which service would you use for performing security assessment?
+
+AWS Inspector
+
+
+
+Which service would you use for creating DNS records?
+
+Route 53
+
+
+
+What would you use if you need a fully managed document database?
+
+Amazon DocumentDB
+
+
+
+Which service would you use to add access control (or sign-up, sign-in forms) to your web/mobile apps?
+
+AWS Cognito
+
+
+
+Which service would you use if you need messaging queue?
+
+Simple Queue Service (SQS)
+
+
+
+Which service would you use if you need managed DDoS protection?
+
+AWS Shield
+
+
+
+Which service would you use if you need to store frequently used data for low latency access?
+
+ElastiCache
+
+
+
+What would you use to transfer files over long distances between a client and an S3 bucket?
+
+Amazon S3 Transfer Acceleration
+
+
+
+Which service would you use for distributing incoming requests across multiple resources?
+
+Route 53
+
+
+
+Which services are involved in getting a custom string (based on the input) when inserting a URL in the browser?
+
+Lambda - to define a function that gets an input and returns a certain string
+API Gateway - to define the URL trigger (= when you insert the URL, the function is invoked).
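+
+A minimal sketch of such a Lambda handler in Python, assuming an API Gateway proxy integration (the parameter name is made up):
+
+```
+def handler(event, context):
+    # With a proxy integration, 'event' carries the HTTP request details
+    params = event.get("queryStringParameters") or {}
+    name = params.get("name", "world")
+    return {
+        "statusCode": 200,
+        "body": f"Hello, {name}!",
+    }
+```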
+
+
+
+Which service would you use for data or events streaming?
+
+Kinesis
+
+
+#### AWS DNS
+
+
+What is Route 53?
+
+"Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service..."
+Some of Route 53 features:
+ * Register domain
+ * DNS service - domain name translations
+ * Health checks - verify your app is available
+
+More on Route 53 [here](https://aws.amazon.com/route53)
+
+
+#### AWS Monitoring & Logging
+
+
+What is AWS CloudWatch?
+
+AWS definition: "Amazon CloudWatch is a monitoring and observability service..."
+
+More on CloudWatch [here](https://aws.amazon.com/cloudwatch)
+
+
+
+What is AWS CloudTrail?
+
+AWS definition: "AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account."
+
+Read more on CloudTrail [here](https://aws.amazon.com/cloudtrail)
+
+
+
+What is Simple Notification Service?
+
+AWS definition: "a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications."
+
+Read more about it [here](https://aws.amazon.com/sns)
+
+
+
+Explain the following in regards to SNS:
+
+ - Topics
+ - Subscribers
+ - Publishers
+
+ * Topics - used for grouping multiple endpoints
+ * Subscribers - the endpoints where topics send messages to
+ * Publishers - the provider of the message (event, person, ...)
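+
+A hedged boto3 sketch tying the three together (the topic name and email address are made up):
+
+```
+import boto3
+
+sns = boto3.client("sns")
+
+# Topic - a named channel that groups the endpoints
+topic = sns.create_topic(Name="alerts")
+
+# Subscriber - an endpoint the topic delivers messages to
+sns.subscribe(
+    TopicArn=topic["TopicArn"],
+    Protocol="email",
+    Endpoint="oncall@example.com",  # made-up address
+)
+
+# Publisher - whoever sends a message to the topic
+sns.publish(TopicArn=topic["TopicArn"], Message="Disk usage above 90%")
+```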
+
+
+#### AWS Billing & Support
+
+
+What is AWS Organizations?
+
+AWS definition: "AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS."
+More on Organizations [here](https://aws.amazon.com/organizations)
+
+
+
+What are Service Control Policies and to which service do they belong?
+
+AWS organizations service and the definition by Amazon: "SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines."
+
+Learn more [here](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html)
+
+
+
+Explain AWS pricing model
+
+It mainly works on "pay-as-you-go" meaning you pay only for what are using and when you are using it.
+In s3 you pay for 1. How much data you are storing 2. Making requests (PUT, POST, ...)
+In EC2 it's based on the purchasing option (on-demand, spot, ...), instance type, AMI type and the region used.
+
+More on AWS pricing model [here](https://aws.amazon.com/pricing)
+
+
+
+How should one estimate AWS costs when, for example, comparing them to on-premise solutions?
+
+* TCO calculator
+* AWS simple calculator
+* Cost Explorer
+
+
+
+What does basic support in AWS include?
+
+* 24x7 customer service
+* Trusted Advisor
+* AWS Personal Health Dashboard
+
+
+
+How are EC2 instances billed?
+
+
+
+What is the AWS Pricing Calculator used for?
+
+
+
+What is Amazon Connect?
+
+Amazon definition: "Amazon Connect is an easy to use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost."
+
+Learn more [here](https://aws.amazon.com/connect)
+
+
+
+What are "APN Consulting Partners"?
+
+Amazon definition: "APN Consulting Partners are professional services firms that help customers of all types and sizes design, architect, build, migrate, and manage their workloads and applications on AWS, accelerating their journey to the cloud."
+
+Learn more [here](https://aws.amazon.com/partners/consulting)
+
+
+
+Which of the following are AWS support plans (sorted in ascending order)?
+
+ - Basic, Developer, Business, Enterprise
+ - Newbie, Intermediate, Pro, Enterprise
+ - Developer, Basic, Business, Enterprise
+ - Beginner, Pro, Intermediate Enterprise
+
+
+ - Basic, Developer, Business, Enterprise
+
+
+
+True or False? Region is a factor when it comes to EC2 costs/pricing
+
+True. You pay differently based on the chosen region.
+
+
+
+What is "AWS Infrastructure Event Management"?
+
+AWS Definition: "AWS Infrastructure Event Management is a structured program available to Enterprise Support customers (and Business Support customers for an additional fee) that helps you plan for large-scale events such as product or application launches, infrastructure migrations, and marketing events."
+
+
+#### AWS Automation
+
+
+What is AWS CodeDeploy?
+
+Amazon definition: "AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers."
+
+Learn more [here](https://aws.amazon.com/codedeploy)
+
+
+
+Explain what is CloudFormation
+
+
+#### AWS - Misc
+
+
+Which AWS service you have experience with that you think is not very common?
+
+
+
+What is AWS CloudSearch?
+
+
+
+What is AWS Lightsail?
+
+AWS definition: "Lightsail is an easy-to-use cloud platform that offers you everything needed to build an application or website, plus a cost-effective, monthly plan."
+
+
+
+What is AWS Rekognition?
+
+AWS definition: "Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise to use."
+
+Learn more [here](https://aws.amazon.com/rekognition)
+
+
+
+What are AWS Resource Groups used for?
+
+Amazon definition: "You can use resource groups to organize your AWS resources. Resource groups make it easier to manage and automate tasks on large numbers of resources at one time. "
+
+Learn more [here](https://docs.aws.amazon.com/ARG/latest/userguide/welcome.html)
+
+
+
+What is AWS Global Accelerator?
+
+Amazon definition: "AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users..."
+
+Learn more [here](https://aws.amazon.com/global-accelerator)
+
+
+
+What is AWS Config?
+
+Amazon definition: "AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources."
+
+Learn more [here](https://aws.amazon.com/config)
+
+
+
+What is AWS X-Ray?
+
+AWS definition: "AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture."
+Learn more [here](https://aws.amazon.com/xray)
+
+
+
+What is AWS OpsWorks?
+
+Amazon definition: "AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet."
+
+Learn more about it [here](https://aws.amazon.com/opsworks)
+
+
+
+What is AWS Athena?
+
+"Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL."
+
+Learn more about AWS Athena [here](https://aws.amazon.com/athena)
+
+
+
+What is Amazon Cloud Directory?
+
+Amazon definition: "Amazon Cloud Directory is a highly available multi-tenant directory-based store in AWS. These directories scale automatically to hundreds of millions of objects as needed for applications."
+
+Learn more [here](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/what_is_cloud_directory.html)
+
+
+
+What is AWS Elastic Beanstalk?
+
+AWS definition: "AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services...You can simply upload your code and Elastic Beanstalk automatically handles the deployment"
+
+Learn more about it [here](https://aws.amazon.com/elasticbeanstalk)
+
+
+
+What is AWS SWF?
+
+Amazon definition: "Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud."
+
+Learn more on Amazon Simple Workflow Service [here](https://aws.amazon.com/swf)
+
+
+
+What is AWS EMR?
+
+AWS definition: "big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto."
+
+Learn more [here](https://aws.amazon.com/emr)
+
+
+
+What is AWS Quick Starts?
+
+AWS definition: "Quick Starts are built by AWS solutions architects and partners to help you deploy popular technologies on AWS, based on AWS best practices for security and high availability."
+
+Read more [here](https://aws.amazon.com/quickstart)
+
+
+
+What is the Trusted Advisor?
+
+
+
+What is AWS Service Catalog?
+
+Amazon definition: "AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS."
+
+Learn more [here](https://aws.amazon.com/servicecatalog)
+
+
+
+What is AWS CAF?
+
+Amazon definition: "AWS Professional Services created the AWS Cloud Adoption Framework (AWS CAF) to help organizations design and travel an accelerated path to successful cloud adoption. "
+
+Learn more [here](https://aws.amazon.com/professional-services/CAF)
+
+
+
+What is AWS Cloud9?
+
+AWS: "AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser"
+
+
+
+What is AWS CloudShell?
+
+AWS: "AWS CloudShell is a browser-based shell that makes it easy to securely manage, explore, and interact with your AWS resources."
+
+
+
+What is AWS Application Discovery Service?
+
+Amazon definition: "AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers."
+
+Learn more [here](https://aws.amazon.com/application-discovery)
+
+
+
+What is the AWS Well-Architected Framework and what pillars is it based on?
+
+AWS definition: "The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. Based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization"
+
+Learn more [here](https://aws.amazon.com/architecture/well-architected)
+
+
+
+What AWS services are serverless (or have the option to be serverless)?
+
+* AWS Lambda
+* AWS Athena
+
+
+
+What is Simple Queue Service (SQS)?
+
+AWS definition: "Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications".
+
+Learn more about it [here](https://aws.amazon.com/sqs)
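+
+A hedged boto3 sketch of the decoupling SQS enables (the queue name and message are made up):
+
+```
+import boto3
+
+sqs = boto3.client("sqs")
+
+# Producer side: create a queue and send a message to it
+queue = sqs.create_queue(QueueName="jobs")  # made-up queue name
+sqs.send_message(QueueUrl=queue["QueueUrl"], MessageBody="process order 42")
+
+# Consumer side: a decoupled worker polls the queue at its own pace
+response = sqs.receive_message(QueueUrl=queue["QueueUrl"], MaxNumberOfMessages=1)
+for msg in response.get("Messages", []):
+    print(msg["Body"])
+    # Delete the message once it has been processed successfully
+    sqs.delete_message(QueueUrl=queue["QueueUrl"], ReceiptHandle=msg["ReceiptHandle"])
+```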
+
+
diff --git a/exercises/cicd/README.md b/exercises/cicd/README.md
new file mode 100644
index 0000000..250b3cc
--- /dev/null
+++ b/exercises/cicd/README.md
@@ -0,0 +1,305 @@
+## CI/CD
+
+### CI/CD Exercises
+
+|Name|Topic|Objective & Instructions|Solution|Comments|
+|--------|--------|------|----|----|
+| Set up a CI pipeline | CI | [Exercise](ci_for_open_source_project.md) | | |
+| Deploy to Kubernetes | Deployment | [Exercise](deploy_to_kubernetes.md) | [Solution](solutions/deploy_to_kubernetes/README.md) | |
+| Jenkins - Remove Jobs | Jenkins Scripts | [Exercise](remove_jobs.md) | [Solution](solutions/remove_jobs_solution.groovy) | |
+| Jenkins - Remove Builds | Jenkins Scripts | [Exercise](remove_builds.md) | [Solution](solutions/remove_builds_solution.groovy) | |
+
+### CI/CD Self Assessment
+
+
+What is Continuous Integration?
+
+A development practice where developers integrate code into a shared repository frequently. The rate can range from a couple of changes a day or a week up to several changes an hour at larger scales.
+
+Each piece of code (change/patch) is verified to make sure the change is safe to merge. Today, it's a common practice to test the change using an automated build that makes sure the code can be integrated. It can be one build which runs several tests in different levels (unit, functional, etc.) or several separate builds, all or some of which have to pass in order for the change to be merged into the repository.
+
+
+
+What is Continuous Deployment?
+
+A development strategy used by developers to release software automatically into production, where any code commit must pass through an automated testing phase. Only when this is successful is the release considered production worthy. This eliminates any human interaction and should be implemented only after production-ready pipelines have been set up with real-time monitoring and reporting of deployed assets. If any issues are detected in production, it should be easy to roll back to the previous working state.
+
+For more info please read [here](https://www.atlassian.com/continuous-delivery/continuous-deployment)
+
+
+
+Can you describe an example of a CI (and/or CD) process starting the moment a developer submitted a change/PR to a repository?
+
+There are many answers to such a question, as CI processes vary depending on the technologies used and the type of project to which the change was submitted.
+Such processes can include one or more of the following stages:
+
+* Compile
+* Build
+* Install
+* Configure
+* Update
+* Test
+
+An example of one possible answer:
+
+A developer submitted a pull request to a project. The PR (pull request) triggered two jobs (or one combined job): one job for running lint tests on the change, and a second job for building a package which includes the submitted change and running multiple API/scenario tests using that package. Once all tests pass and the change is approved by a maintainer/core, it's merged/pushed to the repository. If some of the tests fail, the change will not be allowed to be merged/pushed to the repository.
+
+A completely different answer or CI process can describe how a developer pushes code to a repository, a workflow is then triggered to build a container image and push it to the registry. Once the image is in the registry, the new changes are applied to the k8s cluster.
+
+
+
+What is Continuous Delivery?
+
+A development strategy used to frequently deliver code to QA and Ops for testing. This entails having a staging area that has production-like features where changes can only be accepted for production after a manual review. Because of this human involvement, there is usually a time lag between release and review, making it slower and more error prone compared to continuous deployment.
+
+For more info please read [here](https://www.atlassian.com/continuous-delivery/continuous-deployment)
+
+
+
+What is difference between Continuous Delivery and Continuous Deployment?
+
+Both encapsulate the same process of deploying the changes which were compiled and/or tested in the CI pipelines.
+The difference between the two is that Continuous Delivery isn't a fully automated process, as opposed to Continuous Deployment where every change that is tested in the process is eventually deployed to production. In continuous delivery someone is either approving the deployment process, or the deployment process is based on constraints and conditions (like a time constraint of deploying every week/month/...)
+
+
+
+What CI/CD best practices are you familiar with? Or what do you consider as CI/CD best practice?
+
+* Commit and test often.
+* Testing/Staging environment should be a clone of production environment.
+* Clean up your environments (e.g. your CI/CD pipelines may create a lot of resources. They should also take care of cleaning up everything they create)
+* The CI/CD pipelines should provide the same results when executed locally or remotely
+* Treat CI/CD as another application in your organization. Not as glue code.
+* On demand environments instead of pre-allocated resources for CI/CD purposes
+* Stages/Steps/Tasks of pipelines should be shared between applications or microservices (don't re-invent common tasks like "cloning a project")
+
+
+
+You are given a pipeline and a pool with 3 workers: virtual machine, baremetal and a container. How will you decide on which one of them to run the pipeline?
+
+
+
+Where do you store CI/CD pipelines? Why?
+
+There are multiple approaches as to where to store the CI/CD pipeline definitions:
+
+1. App Repository - store them in the same repository of the application they are building or testing (perhaps the most popular one)
+2. Central Repository - store all organization's/project's CI/CD pipelines in one separate repository (perhaps the best approach when multiple teams test the same set of projects and they end up having many pipelines)
+3. CI repo for every app repo - you separate CI related code from app code but you don't put everything in one place (perhaps the worst option due to the maintenance)
+4. The platform where the CI/CD pipelines are being executed (e.g. Kubernetes Cluster in case of Tekton/OpenShift Pipelines).
+
+
+
+How do you perform capacity planning for your CI/CD resources? (e.g. servers, storage, etc.)
+
+
+
+How would you structure/implement CD for an application which depends on several other applications?
+
+
+
+How do you measure your CI/CD quality? Are there any metrics or KPIs you are using for measuring the quality?
+
+
+#### CI/CD - Jenkins
+
+
+What is Jenkins? What have you used it for?
+
+Jenkins is an open source automation tool written in Java with plugins built for Continuous Integration purpose. Jenkins is used to build and test your software projects continuously making it easier for developers to integrate changes to the project, and making it easier for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.
+
+Jenkins integrates development life-cycle processes of all kinds, including build, document, test, package, stage, deploy, static analysis and much more.
+
+
+
+
+What are the advantages of Jenkins over its competitors? Can you compare it to one of the following systems?
+
+ * Travis
+ * Bamboo
+ * Teamcity
+ * CircleCI
+
+
+
+What are the limitations or disadvantages of Jenkins?
+
+This might be considered to be an opinionated answer:
+
+* Old fashioned dashboards without many options to customize them
+* Containers readiness (this has improved with Jenkins X)
+* By itself, it doesn't have many features. On the other hand, there many plugins created by the community to expand its abilities
+* Managing Jenkins and its pipelines as code can be one hell of a nightmare
+
+
+
+Explain the following:
+
+- Job
+- Build
+- Plugin
+- Node or Worker
+- Executor
+
+- Job is an automation definition = what and where to execute once the user clicks on "build"
+- Build is a running instance of a job. You can have one or more builds at any given point of time (unless limited by configuration)
+- A worker is the machine/instance on which the build is running. When a build starts, it "acquires" a worker out of a pool to run on it.
+- An executor is a property of the worker which defines how many builds can run on that worker in parallel. An executor value of 3 means that 3 builds can run at any point on that worker (not necessarily of the same job - any builds)
+
+
+
+What plugins have you used in Jenkins?
+
+
+
+Have you used Jenkins for CI or CD processes? Can you describe them?
+
+
+
+What type of jobs are there? Which types have you used?
+
+
+
+How did you report build results to users? What ways are there to report the results?
+
+You can report via:
+ * Emails
+ * Messaging apps
+ * Dashboards
+
+Each has its own disadvantages and advantages. Emails for example, if sent too often, can be eventually disregarded or ignored.
+
+
+
+You need to run unit tests every time a change is submitted to a given project. Describe in detail how your pipeline would look and what will be executed in each stage
+
+The pipelines will have multiple stages:
+
+ * Clone the project
+ * Install test dependencies (for example, if I need tox package to run the tests, I will install it in this stage)
+ * Run unit tests
+ * (Optional) report results (For example an email to the users)
+ * Archive the relevant logs/files
+
+
+
+How to secure Jenkins?
+
+ [Jenkins documentation](https://www.jenkins.io/doc/book/security/securing-jenkins/) provides some basic intro for securing your Jenkins server.
+
+
+
+Describe how do you add new nodes (agents) to Jenkins
+
+You can describe the UI way to add new nodes, but it's better to explain how to do it in a way that scales, like a script, or using a dynamic source of nodes like one of the existing clouds.
+
+
+
+How to acquire multiple nodes for one specific build?
+
+
+
+Whenever a build fails, you would like to notify the team owning the job regarding the failure and provide failure reason. How would you do that?
+
+
+
+There are four teams in your organization. How to prioritize the builds of each team? So the jobs of team x will always run before team y for example
+
+
+
+If you are managing a dozen jobs, you can probably use the Jenkins UI. But how do you manage the creation and deletion of hundreds of jobs every week/month?
+
+
+
+What are some of Jenkins limitations?
+
+ * Testing cross-dependencies (changes from multiple projects together)
+ * Starting builds from any stage (although Cloudbees implemented something called checkpoints)
+
+
+
+What is the difference between a scripted pipeline and a declarative pipeline? Which type are you using?
+
+
+
+How would you implement the option of starting a build from a certain stage and not from the beginning?
+
+
+
+Do you have experience with developing a Jenkins plugin? Can you describe this experience?
+
+
+
+Have you written Jenkins scripts? If yes, what for and how do they work?
+
+
+#### CI/CD - GitHub Actions
+
+
+What is a Workflow in GitHub Actions?
+
+A YAML file that defines the automation actions and instructions to execute upon a specific event.
+The file is placed in the repository itself.
+
+A Workflow can be anything - running tests, compiling code, building packages, ...
+
+
+
+What is a Runner in GitHub Actions?
+
+A workflow has to be executed somewhere. The environment where the workflow is executed is called Runner.
+A Runner can be an on-premises host or GitHub-hosted.
+
+
+
+What is a Job in GitHub Actions?
+
+A job is a series of steps which are executed on the same runner/environment.
+A workflow must include at least one job.
+
+
+
+What is an Action in GitHub Actions?
+
+An action is the smallest unit in a workflow. It includes the commands to execute as part of the job.
+
+
+
+In a GitHub Actions workflow, what is the 'on' attribute/directive used for?
+
+Specify upon which events the workflow will be triggered.
+For example, you might configure the workflow to trigger every time a change is pushed to the repository.
+
+
+
+True or False? In GitHub Actions, jobs are executed in parallel by default
+
+True
+
+
+
+How to create dependencies between jobs so one job runs after another?
+
+Using the "needs" attribute/directive.
+
+```
+jobs:
+ job1:
+ job2:
+ needs: job1
+```
+
+In the above example, job1 must complete successfully before job2 runs
+
+
+
+How to add a Workflow to a repository?
+
+CLI:
+
+1. Create the directory `.github/workflows` in the repository
+2. Add a YAML file
+
+UI:
+
+1. In the repository page, click on "Actions"
+2. Choose workflow and click on "Set up this workflow"
+
diff --git a/exercises/devops/deploy_to_kubernetes.md b/exercises/cicd/deploy_to_kubernetes.md
similarity index 100%
rename from exercises/devops/deploy_to_kubernetes.md
rename to exercises/cicd/deploy_to_kubernetes.md
diff --git a/exercises/devops/solutions/deploy_to_kubernetes/Jenkinsfile b/exercises/cicd/solutions/deploy_to_kubernetes/Jenkinsfile
similarity index 100%
rename from exercises/devops/solutions/deploy_to_kubernetes/Jenkinsfile
rename to exercises/cicd/solutions/deploy_to_kubernetes/Jenkinsfile
diff --git a/exercises/devops/solutions/deploy_to_kubernetes/README.md b/exercises/cicd/solutions/deploy_to_kubernetes/README.md
similarity index 100%
rename from exercises/devops/solutions/deploy_to_kubernetes/README.md
rename to exercises/cicd/solutions/deploy_to_kubernetes/README.md
diff --git a/exercises/devops/solutions/deploy_to_kubernetes/deploy.yml b/exercises/cicd/solutions/deploy_to_kubernetes/deploy.yml
similarity index 100%
rename from exercises/devops/solutions/deploy_to_kubernetes/deploy.yml
rename to exercises/cicd/solutions/deploy_to_kubernetes/deploy.yml
diff --git a/exercises/devops/solutions/deploy_to_kubernetes/helloworld.yml b/exercises/cicd/solutions/deploy_to_kubernetes/helloworld.yml
similarity index 100%
rename from exercises/devops/solutions/deploy_to_kubernetes/helloworld.yml
rename to exercises/cicd/solutions/deploy_to_kubernetes/helloworld.yml
diff --git a/exercises/devops/solutions/deploy_to_kubernetes/html/css/normalize.css b/exercises/cicd/solutions/deploy_to_kubernetes/html/css/normalize.css
similarity index 100%
rename from exercises/devops/solutions/deploy_to_kubernetes/html/css/normalize.css
rename to exercises/cicd/solutions/deploy_to_kubernetes/html/css/normalize.css
diff --git a/exercises/devops/solutions/deploy_to_kubernetes/html/css/skeleton.css b/exercises/cicd/solutions/deploy_to_kubernetes/html/css/skeleton.css
similarity index 100%
rename from exercises/devops/solutions/deploy_to_kubernetes/html/css/skeleton.css
rename to exercises/cicd/solutions/deploy_to_kubernetes/html/css/skeleton.css
diff --git a/exercises/devops/solutions/deploy_to_kubernetes/html/images/favicon.png b/exercises/cicd/solutions/deploy_to_kubernetes/html/images/favicon.png
similarity index 100%
rename from exercises/devops/solutions/deploy_to_kubernetes/html/images/favicon.png
rename to exercises/cicd/solutions/deploy_to_kubernetes/html/images/favicon.png
diff --git a/exercises/devops/solutions/deploy_to_kubernetes/html/index.html b/exercises/cicd/solutions/deploy_to_kubernetes/html/index.html
similarity index 100%
rename from exercises/devops/solutions/deploy_to_kubernetes/html/index.html
rename to exercises/cicd/solutions/deploy_to_kubernetes/html/index.html
diff --git a/exercises/devops/solutions/deploy_to_kubernetes/inventory b/exercises/cicd/solutions/deploy_to_kubernetes/inventory
similarity index 100%
rename from exercises/devops/solutions/deploy_to_kubernetes/inventory
rename to exercises/cicd/solutions/deploy_to_kubernetes/inventory
diff --git a/exercises/cloud/README.md b/exercises/cloud/README.md
new file mode 100644
index 0000000..c56cca6
--- /dev/null
+++ b/exercises/cloud/README.md
@@ -0,0 +1,108 @@
+## Cloud
+
+
+What is Cloud Computing? What is a Cloud Provider?
+
+Cloud computing refers to the delivery of on-demand computing services
+over the internet on a pay-as-you-go basis.
+
+In simple words, Cloud computing is a service that lets you use any computing
+service such as a server, storage, networking, databases, and intelligence,
+right through your browser, without owning anything. You can do almost anything
+you can think of, as long as it doesn't require you to stay close to your hardware.
+
+Cloud service providers are companies that establish public clouds, manage private clouds, or offer on-demand cloud computing components (also known as cloud computing services) like Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service(SaaS). Cloud services can reduce business process costs when compared to on-premise IT.
+
+
+
+What are the advantages of cloud computing? Mention at least 3 advantages
+
+* Pay as you go: you are paying only for what you are using. No upfront payments and payment stops when resources are no longer used.
+* Scalable: resources are scaled down or up based on demand
+* High availability: resources and applications provide seamless experience, even when some services are down
+* Disaster recovery
+
+
+
+True or False? Cloud computing is a consumption-based model (users only pay for resources they use)
+
+True
+
+
+
+What types of Cloud Computing services are there?
+
+IAAS - Infrastructure as a Service
+PAAS - Platform as a Service
+SAAS - Software as a Service
+
+
+
+Explain each of the following and give an example:
+
+ * IAAS
+ * PAAS
+ * SAAS
+ * IAAS - users have control over a complete operating system and don't need to worry about the physical resources, which are managed by the cloud service provider. Example: AWS EC2.
+ * PAAS - the cloud service provider takes care of the operating system and middleware, and users only need to focus on their data and application. Example: AWS Elastic Beanstalk.
+ * SAAS - a cloud-based method of providing software to users: the software logic runs in the cloud and is managed by the cloud service provider. Example: Gmail.
+
+
+
+What types of clouds (or cloud deployments) are there?
+
+ * Public - Cloud services sharing computing resources among multiple customers
+ * Private - Cloud services having computing resources limited to specific customer or organization, managed by third party or organizations itself
+ * Hybrid - Combination of public and private clouds
+
+
+
+What are the differences between Cloud Providers and On-Premise solution?
+
+With cloud providers, someone else owns and manages the hardware, hires the relevant infrastructure teams and pays for the real estate (for both hardware and people). You can focus on your business.
+
+With an on-premise solution, it's quite the opposite. You need to take care of the hardware and infrastructure teams and pay for everything, which can be quite expensive. On the other hand, it's tailored to your needs.
+
+
+
+What is Serverless Computing?
+
+The main idea behind serverless computing is that you don't need to manage the creation and configuration of servers. All you need to focus on is splitting your app into multiple functions which will be triggered by some actions.
+
+It's important to note that:
+
+* Serverless Computing is still using servers. So saying there are no servers in serverless computing is completely wrong
+* Serverless Computing allows you to have a different paying model. You basically pay only when your functions are running and not when the VM or containers are running as in other payment models
+
+
+
+Can we replace any type of computing on servers with serverless?
+
+
+
+Is there a difference between managed service to SaaS or is it the same thing?
+
+
+
+What is auto scaling?
+
+AWS definition: "AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost"
+
+Read more about auto scaling [here](https://aws.amazon.com/autoscaling)
+
+
+
+True or False? Auto Scaling is about adding resources (such as instances) and not about removing resources
+
+False. Auto scaling adjusts capacity and this can mean removing some resources based on usage and performances.
+
+
+#### Cloud - Security
+
+
+How to secure instances in the cloud?
+
+ * Instances should have only the minimal permissions needed. You don't want an instance-level incident to become an account-level incident
+ * Instances should be accessed through load balancers or bastion hosts. In other words, they should be off the internet (in a private subnet behind a NAT).
+ * Use the latest OS images for your instances (or at least apply the latest patches)
+
diff --git a/exercises/containers/README.md b/exercises/containers/README.md
new file mode 100644
index 0000000..3b40739
--- /dev/null
+++ b/exercises/containers/README.md
@@ -0,0 +1,887 @@
+## Containers
+
+### Containers Exercises
+
+|Name|Topic|Objective & Instructions|Solution|Comments|
+|--------|--------|------|----|----|
+|Running Containers|Intro|[Exercise](running_containers.md)|[Solution](solutions/running_containers.md)
+|Working with Images|Image|[Exercise](working_with_images.md)|[Solution](solutions/working_with_images.md)
+|My First Dockerfile|Dockerfile|[Exercise](write_dockerfile_run_container.md)|
+|Run, Forest, Run!|Restart Policies|[Exercise](run_forest_run.md)|[Solution](solutions/run_forest_run.md)
+|Layer by Layer|Image Layers|[Exercise](image_layers.md)|[Solution](solutions/image_layers.md)
+|Containerize an application | Containerization |[Exercise](containerize_app.md)|[Solution](solutions/containerize_app.md)
+|Multi-Stage Builds|Multi-Stage Builds|[Exercise](multi_stage_builds.md)|[Solution](solutions/multi_stage_builds.md)
+
+### Containers Self Assessment
+
+
+What is a Container?
+
+This can be tricky to answer since there are many ways to create containers:
+
+ - Docker
+ - systemd-nspawn
+ - LXC
+
+If we focus on OCI (Open Container Initiative) based containers, the OCI offers the following [definition](https://github.com/opencontainers/runtime-spec/blob/master/glossary.md#container): "An environment for executing processes with configurable isolation and resource limitations. For example, namespaces, resource limits, and mounts are all part of the container environment."
+
+
+
+Why are containers needed? What is their goal?
+
+OCI provides a good [explanation](https://github.com/opencontainers/runtime-spec/blob/master/principles.md#the-5-principles-of-standard-containers): "Define a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in a format that is self-describing and portable, so that any compliant runtime can run it without extra dependencies, regardless of the underlying machine and the contents of the container."
+
+
+
+How are containers different from virtual machines (VMs)?
+
+The primary difference between containers and VMs is that containers allow you to virtualize
+multiple workloads on a single operating system while in the case of VMs, the hardware is being virtualized to run multiple machines each with its own guest OS.
+You can also think about it as containers are for OS-level virtualization while VMs are for hardware virtualization.
+
+* Containers don't require an entire guest operating system as VMs do. Containers share the system's kernel, as opposed to VMs. They isolate themselves via the use of kernel features such as namespaces and cgroups
+* It usually takes a few seconds to set up a container, as opposed to VMs which can take minutes (or at least more time than containers) since there is an entire OS to boot and initialize, whereas containers share the underlying OS
+* Virtual machines are considered to be more secure than containers
+* VM portability is considered to be limited when compared to containers
+
+
+
+Do we still need virtual machines in the era of containers? Are they still relevant?
+
+
+
+In which scenarios would you use containers and in which you would prefer to use VMs?
+
+You should choose VMs when:
+ * You need to run an application which requires all the resources and functionalities of an OS
+ * You need full isolation and security
+
+You should choose containers when:
+ * You need a lightweight solution
+ * Running multiple versions or instances of a single application
+
+
+
+Describe the process of containerizing an application
+
+1. Write a Dockerfile that includes your app (including the commands to run it) and its dependencies
+2. Build the image using the Dockerfile you wrote
+3. You might want to push the image to a registry
+4. Run the container using the image you've built
+
+
+#### Containers - OCI
+
+
+What is the OCI?
+
+OCI (Open Container Initiative) is an open governance structure established in 2015 to standardize container creation - mostly image format and runtime. At that time there were a number of parties involved, the most prominent one being Docker.
+
+Specifications published by OCI:
+
+ - [image-spec](https://github.com/opencontainers/image-spec)
+ - [runtime-spec](https://github.com/opencontainers/runtime-spec)
+
+
+
+Which operations must OCI-based containers support?
+
+Create, Kill, Delete, Start and Query State.
+
+
+#### Containers - Basic Commands
+
+
+How to list all the containers on a given host?
+
+In the case of Docker, use: `docker container ls`
+In the case of Podman, it's not very different: `podman container ls`
+
+
+
+How to run a container?
+
+Docker: `docker container run ubuntu`
+Podman: `podman container run ubuntu`
+
+
+
+Why, after running `podman container run ubuntu`, is the output of `podman container ls` empty?
+
+Because the container exits immediately after running the ubuntu image. This is completely normal and expected, as containers are designed to run a service or an app and exit when they are done running it.
+
+If you want the container to keep running, you can run a command like `sleep 100`, which will run for 100 seconds, or you can attach to the terminal of the container with a command like: `podman container run -it ubuntu /bin/bash`
+
+
+
+How to attach your shell to a terminal of a running container?
+
+`podman container exec -it [container id/name] bash`
+
+This can be done in advance while running the container: `podman container run -it [image:tag] /bin/bash`
+
+
+
+True or False? You can remove a running container if it isn't running anything
+
+False. You have to stop the container before removing it.
+
+
+
+How to stop and remove a container?
+
+`podman container stop [container id/name] && podman container rm [container id/name]`
+
+
+
+What happens when you run `docker container run ubuntu`?
+
+1. Docker client posts the command to the API server running as part of the Docker daemon
+2. Docker daemon checks if a local image exists
+ 1. If it exists, it will use it
+ 2. If it doesn't exist, it will go to the remote registry (Docker Hub by default) and pull the image locally
+3. containerd and runc are instructed (by the daemon) to create and start the container
+
+
+
+How to run a container in the background?
+
+With the `-d` flag. The container will run in the background, without being attached to the terminal.
+
+`docker container run -d httpd` or `podman container run -d httpd`
+
+
+#### Containers - Images
+
+
+What is a container image?
+
+* An image of a container contains the application, its dependencies and the operating system where the application is executed.
+* It's a collection of read-only layers. These layers are loosely coupled
+ * Each layer is assembled out of one or more files
+
+
+
+Why are container images relatively small?
+
+* Most images don't contain a kernel. They share and access the one used by the host on which they are running
+* Containers are intended to run a specific application in most cases. This means they hold only what the application needs in order to run
+
+
+
+How to list the container images on a certain host?
+
+`podman image ls`
+`docker image ls`
+
+Depends on which container engine you use.
+
+
+
+What is the centralized location where images are stored called?
+
+Registry
+
+
+
+A registry contains one or more ____ which in turn contain one or more ____
+
+A registry contains one or more repositories which in turn contain one or more images.
+
+
+
+How to find out which registry you use by default in your environment?
+
+Depends on the container technology you are using. For example, in the case of Docker, it can be done with `docker info`
+
+```
+> docker info
+Registry: https://index.docker.io/v1
+```
+
+
+
+How to retrieve the latest ubuntu image?
+
+`docker image pull ubuntu:latest`
+
+
+
+True or False? It's not possible to remove an image if a certain container is using it
+
+True. You should stop and remove the container before trying to remove the image it uses.
+
+
+
+True or False? If a tag isn't specified when pulling an image, the 'latest' tag is being used
+
+True
+
+
+
+True or False? Using the 'latest' tag when pulling an image means you are pulling the most recently published image
+
+False. While this might be true in some cases, it's not guaranteed that you'll pull the most recently published image when using the 'latest' tag.
+For example, for some images, the 'edge' tag is used for the most recently published images.
+
+
+
+Where are pulled images stored?
+
+Depends on the container technology being used. For example, in case of Docker, images are stored in `/var/lib/docker/`
+
+
+
+Explain container image layers
+
+ - The layers of an image are where all the content is stored - code, files, etc.
+ - Each layer is independent
+ - Each layer has an ID that is a hash based on its content
+ - The layers (like the image itself) are immutable, which means a change to one of the layers can be easily identified
+
+
+
+True or False? Changing the content of any of the image layers will cause the hash content of the image to change
+
+True. These hashes are content based and since images (and their layers) are immutable, any change will cause the hashes to change.
+
+
+
+How to list the layers of an image?
+
+In case of Docker, you can use `docker image inspect [image name]`
+
+
+
+True or False? In most cases, container images contain their own kernel
+
+False. They share and access the one used by the host on which they are running.
+
+
+
+True or False? A single container image can have multiple tags
+
+True. When listing images, you might be able to see two images with the same ID but different tags.
+
+
+
+What is a dangling image?
+
+It's an image without tags attached to it.
+One way to reach this situation is by building an image with the exact same name and tag as an already existing image. The dangling image can still be referenced by using its full SHA.
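+
+A minimal sketch of listing and cleaning up dangling images (assuming Docker as the engine):
+
+```
+# list images that have no tag attached to them
+docker image ls --filter dangling=true
+
+# remove all dangling images
+docker image prune
+```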
+
+
+
+How to see changes done to a given image over time?
+
+In the case of Docker, you could use `docker history [image name]`
+
+
+
+True or False? Multiple images can share layers
+
+True.
+One piece of evidence for that can be found when pulling images. Sometimes when you pull an image, you'll see a line similar to the following:
+`fa20momervif17: already exists`
+
+This is because it recognizes that such a layer already exists on the host, so there is no need to pull the same layer twice.
+
+
+
+What is the digest of an image? What problem does it solve?
+
+Tags are mutable. This means that we can have two different images with the same name and the same tag. It can be very confusing to see two images with the same name and the same tag in your environment. How would you know if they are truly the same or different?
+
+This is where "digests" come in handy. A digest is a content-addressable identifier. It isn't mutable like tags. Its value is predictable, and this is how you can tell whether two images have the same content, rather than merely looking at their name and tag (see the example below).
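+
+A small sketch of working with digests (the digest value is a placeholder):
+
+```
+# show digests next to image names
+docker image ls --digests
+
+# pull an image by its digest instead of a tag
+docker image pull ubuntu@sha256:[digest]
+```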
+
+
+
+True or False? A single image can support multiple architectures (Linux x64, Windows x64, ...)
+
+True.
+
+
+
+What is a distribution hash in regards to layers?
+
+ - Layers are compressed when pushed or pulled
+ - distribution hash is the hash of the compressed layer
+ - the distribution hash used when pulling or pushing images for verification (making sure no one tempered with image or layers)
+ - It's also used for avoiding ID collisions (a case where two images have exactly the same generated ID)
+
+
+
+How do multi-architecture images work? Explain by describing what happens when an image is pulled
+
+1. A client makes a call to the registry to use a specific image (using an image name and optionally a tag)
+2. A manifest list is parsed (assuming it exists) to check if the architecture of the client is supported and available as a manifest
+3. If it is supported (a manifest for the architecture is available) the relevant manifest is parsed to obtain the IDs of the layers
+4. Each layer is then pulled using the obtained IDs from the previous step
+
+
+
+How to check which architectures a certain container image supports?
+
+`docker manifest inspect [image name]`
+
+
+
+How to check what a certain container image will execute once we run a container based on that image?
+
+Look for the "Cmd" or "Entrypoint" fields in the output of `docker image inspect [image name]`
+
+
+
+How to view the instructions that were used to build an image?
+
+`docker image history [image name]:[tag]`
+
+
+
+How does `docker image build` work?
+
+1. Docker spins up a temporary container
+2. Runs a single instruction in the temporary container
+3. Stores the result as a new image layer
+4. Removes the temporary container
+5. Repeats for every instruction
+
+
+
+What is the role of cache in image builds?
+
+When you build an image for the first time, the different layers are cached. So, while the first build of the image might take time, any subsequent build of the same image (assuming the Dockerfile, and the content used by its instructions, didn't change) will be almost instant thanks to the caching mechanism.
+
+In a little more detail, it works this way:
+1. The first instruction (FROM) will check if the base image already exists on the host before pulling it
+2. For the next instruction, it will check in the build cache whether an existing layer was built from the same base image and with the same instruction
+ 1. If it finds such a layer, it skips the instruction, links the existing layer, and keeps using the cache
+ 2. If it doesn't find a matching layer, it builds the layer and the cache is invalidated
+
+Note: in some cases (like COPY and ADD instructions) the instruction might stay the same, but if the content of what is being copied changes, then the cache is invalidated. This check is done by comparing the checksum of each file that is being copied. A sketch of a cache-friendly Dockerfile follows below.
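+
+A minimal sketch of ordering instructions to make good use of the cache, assuming a Node.js app (package.json and server.js are hypothetical file names):
+
+```
+FROM node:18
+WORKDIR /app
+# dependency manifest first: this layer (and the npm install layer below)
+# stays cached as long as package.json doesn't change
+COPY package.json .
+RUN npm install
+# source code changes only invalidate the layers from this point on
+COPY . .
+CMD ["node", "server.js"]
+```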
+
+
+
+What ways are there to reduce container images size?
+
+ * Reduce the number of instructions - in some cases you may be able to join layers, for example by installing multiple packages with one instruction or by using `&&` to concatenate RUN instructions (see the sketch after this list)
+ * Use smaller images - in some cases you might be using images that contain more than what is needed for your application to run. It's good to review the images you commonly use and check whether smaller alternatives would suffice
+ * Clean up after running commands - some commands, like package installation, create metadata or cache that you might not need for running the application. It's important to clean up after such commands to reduce the image size
+ * For Docker images, you can use multi-stage builds
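+
+A sketch of the first and third points combined, assuming a Debian/Ubuntu-based image:
+
+```
+# one RUN instruction (one layer) instead of three, with cleanup in the same layer
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends curl ca-certificates && \
+    rm -rf /var/lib/apt/lists/*
+```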
+
+
+
+What are the pros and cons of squashing images?
+
+Pros:
+ * Smaller image
+ * Fewer layers (especially if the image originally has a lot of layers)
+
+Cons:
+ * No sharing of the image layers
+ * Push and pull can take more time (because no matching layers are found on the target)
+
+
+#### Containers - Volume
+
+
+How to create a new volume?
+
+`docker volume create some_volume`
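+
+A sketch of how such a volume is typically used afterwards (httpd and the mount path are arbitrary examples):
+
+```
+# mount the volume into a container; data written to /data survives the container
+docker container run -d -v some_volume:/data httpd
+```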
+
+
+#### Containers - Dockerfile
+
+
+What is a Dockerfile?
+
+Different container engines (e.g. Docker, Podman) can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text file that contains all the instructions for building an image which containers can use.
+
+
+
+What is the first instruction in all Dockerfiles and what does it mean?
+
+The first instruction is `FROM [base image name]`.
+It specifies the base layer of the image to be used. Every other instruction is a layer on top of that base image.
+
+
+
+List five different instructions that are available for use in a Dockerfile
+
+ * WORKDIR: sets the working directory inside the image filesystem for all the instructions following it
+ * EXPOSE: documents the specified port (it doesn't add a new layer; rather, it is recorded as image metadata)
+ * ENTRYPOINT: specifies the startup commands to run when a container is started from the image
+ * ENV: sets an environment variable to the given value
+ * USER: sets the user (and optionally the user group) to use while running the image
+
+
+
+What are some of the best practices regarding container images and Dockerfiles that you are following?
+
+ * Include only the packages you are going to use. Nothing else.
+ * Specify a tag in the FROM instruction. Not using a tag means you'll always pull the latest, which changes over time and might lead to unexpected results.
+ * Do not use environment variables to share secrets
+ * Use images from official repositories
+ * Keep images small! - you want them only to include what is required for the application to run successfully. Nothing else.
+ * If you are using the apt package manager, you can use '--no-install-recommends' with `apt-get install` to install only the main dependencies (instead of suggested and recommended packages)
+
+
+
+What is the "build context"?
+
+[Docker docs](https://docs.docker.com/engine/reference/commandline/build): "A build’s context is the set of files located in the specified PATH or URL"
+
+
+
+What is the difference between ADD and COPY in Dockerfile?
+
+COPY takes in a source and destination. It lets you copy in a file or directory from the build context into the Docker image itself.
+ADD lets you do the same, but it also supports two other sources. You can use a URL instead of a file or directory from the build context. In addition, you can extract a tar file from the source directly into the destination.
+
+Although ADD and COPY are functionally similar, generally speaking, COPY is preferred. That’s because it’s more transparent than ADD. COPY only supports the basic copying of files from build context into the container, while ADD has some features (like local-only tar extraction and remote URL support) that are not immediately obvious.
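+
+A short illustrative snippet (the file names and URL are made up):
+
+```
+# COPY: plain copy from the build context into the image
+COPY app.py /app/app.py
+
+# ADD: can also fetch from a URL (URLs are not extracted)...
+ADD https://example.com/tool.tar.gz /tmp/
+
+# ...and auto-extracts local tar archives from the build context
+ADD vendor.tar.gz /app/vendor/
+```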
+
+
+
+What is the difference between CMD and RUN in Dockerfile?
+
+RUN lets you execute commands inside of your Docker image. These commands get executed once at build time and get written into your Docker image as a new layer.
+CMD is the command the container executes by default when you launch the built image. A Dockerfile can only have one CMD.
+You could say that CMD is a Docker run-time operation, meaning it’s not something that gets executed at build time. It happens when you run an image. A running image is called a container.
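+
+A minimal sketch showing both:
+
+```
+FROM ubuntu:22.04
+# RUN executes at build time; its result is baked into a new image layer
+RUN apt-get update && apt-get install -y python3
+# CMD only records the default command to execute when a container starts
+CMD ["python3", "--version"]
+```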
+
+
+
+How to create a new image using a Dockerfile?
+
+The following command is executed from within the directory where the Dockerfile resides:
+
+`docker image build -t some_app:latest .`
+`podman image build -t some_app:latest .`
+
+
+
+Do you perform any checks or testing on your Dockerfiles?
+
+One option is to use the [hadolint](https://github.com/hadolint/hadolint) project, a linter based on Dockerfile best practices.
+
+
+
+Which instructions in Dockerfile create new layers?
+
+Instructions such as FROM, COPY and RUN create new image layers instead of just adding metadata.
+
+
+
+Which instructions in Dockerfile create image metadata and don't create new layers?
+
+Instructions such as ENTRYPOINT, ENV and EXPOSE create image metadata and don't create new layers.
+
+
+
+Is it possible to identify which instructions create a new layer from the output of `docker image history`?
+
+
+#### Containers - Architecture
+
+
+How do containers achieve isolation from the rest of the system?
+
+Through the use of namespaces and cgroups. The Linux kernel has several types of namespaces (a quick way to see them in action is sketched after this list):
+
+ - Process ID namespaces: these namespaces include an independent set of process IDs
+ - Mount namespaces: isolation and control of mountpoints
+ - Network namespaces: isolate system networking resources such as the routing table, interfaces, ARP table, etc.
+ - UTS namespaces: isolate the hostname and domain name
+ - IPC namespaces: isolate interprocess communication
+ - User namespaces: isolate user and group IDs
+ - Time namespaces: isolate the system clocks
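+
+A quick way to experiment with namespaces directly from a shell (a sketch; requires root):
+
+```
+# start a shell in new PID and mount namespaces, with its own /proc
+sudo unshare --pid --fork --mount-proc bash
+
+# inside the new namespace the shell is PID 1 and sees only its own processes
+echo $$    # -> 1
+ps -ef
+
+# back on the host: list the namespaces of running processes
+lsns
+```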
+
+
+
+Describe in detail what happens when you run `podman/docker run hello-world`?
+
+The Docker/Podman CLI passes your request to the container engine (in Docker's case, the Docker daemon).
+The engine downloads the image from Docker Hub (unless it already exists locally).
+The engine creates a new container using the image it downloaded.
+The engine redirects output from the container to the CLI, which redirects it to the standard output.
+
+
+
+Describe the difference between cgroups and namespaces
+
+cgroup: Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour.
+namespace: wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource.
+
+In short:
+
+Cgroups = limit how much you can use;
+namespaces = limit what you can see (and therefore use)
+
+Cgroups involve resource metering and limiting: memory, CPU, block I/O, network.
+
+Namespaces provide processes with their own view of the system.
+
+Multiple namespaces: pid, net, mnt, uts, ipc, user
+
+
+
+#### Containers - Docker Architecture
+
+
+Which components/layers compose the Docker technology?
+
+1. Runtime - responsible for starting and stopping containers
+2. Daemon - implements the Docker API and takes care of managing images (including builds), authentication, security, networking, etc.
+3. Orchestrator
+
+
+
+What components are part of the Docker engine?
+
+ - Docker daemon
+ - containerd
+ - runc
+
+
+
+What is the low-level runtime?
+
+ - The low-level runtime is called runc
+ - It manages every container running on a Docker host
+ - Its purpose is to interact with the underlying OS to start and stop containers
+ - It's the reference implementation of the OCI (Open Containers Initiative) container runtime spec
+ - It's a small CLI wrapper for libcontainer
+
+
+
+What is the high-level runtime?
+
+ - The high-level runtime is called containerd
+ - It was developed by Docker Inc. and at some point donated to the CNCF
+ - It manages the whole lifecycle of a container - start, stop, remove and pause
+ - It takes care of setting up network interfaces, volumes, pushing and pulling images, ...
+ - It manages the lower-level runtime (runc) instances
+ - It's used both by Docker and Kubernetes as a container runtime
+ - It sits between the Docker daemon and runc at the OCI layer
+
+Note: running `ps -ef | grep -i containerd` on a system with Docker installed and running, you should see a process of containerd
+
+
+
+True or False? The docker daemon (dockerd) performs lower-level tasks compared to containerd
+
+False. The Docker daemon performs higher-level tasks compared to containerd.
+It's responsible for managing networks, volumes, images, ...
+
+
+
+Describe in detail what happens when you run `docker pull image:tag`?
+
+The Docker CLI passes your request to the Docker daemon, which resolves the image name to a manifest, pulls the layers (blobs) and applies them via the storage driver. The daemon's logs show the process:
+
+docker.io/library/busybox:latest resolved to a manifestList object with 9 entries; looking for a unknown/amd64 match
+
+found match for linux/amd64 with media type application/vnd.docker.distribution.manifest.v2+json, digest sha256:400ee2ed939df769d4681023810d2e4fb9479b8401d97003c710d0e20f7c49c6
+
+pulling blob \"sha256:61c5ed1cbdf8e801f3b73d906c61261ad916b2532d6756e7c4fbcacb975299fb Downloaded 61c5ed1cbdf8 to tempfile /var/lib/docker/tmp/GetImageBlob909736690
+
+Applying tar in /var/lib/docker/overlay2/507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7/diff" storage-driver=overlay2
+
+Applied tar sha256:514c3a3e64d4ebf15f482c9e8909d130bcd53bcc452f0225b0a04744de7b8c43 to 507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7, size: 1223534
+
+
+
+Describe in detail what happens when you run a container
+
+1. The Docker client converts the run command into an API payload
+2. It then POSTs the payload to the API endpoint exposed by the Docker daemon
+3. When the daemon receives the command to create a new container, it makes a call to containerd via gRPC
+4. containerd converts the required image into an OCI bundle and tells runc to use that bundle for creating the container
+5. runc interfaces with the OS kernel to pull together the different constructs (namespaces, cgroups, etc.) used for creating the container
+6. The container process is started as a child process of runc
+7. Once the container starts, runc exits
+
+
+
+True or False? Killing the Docker daemon will kill all the running containers
+
+False. While this was true at some point, today the container runtime isn't part of the daemon (it's part of containerd and runc) so stopping or killing the daemon will not affect running containers.
+
+
+
+True or False? containerd forks a new instance of runc for every container it creates
+
+True
+
+
+
+True or False? Running a dozen containers will result in having a dozen runc processes
+
+False. Once a container is created, the parent runc process exits.
+
+
+
+What is shim in regards to Docker?
+
+shim is the process that becomes the container's parent when the runc process exits. It's responsible for:
+
+ - Reporting exit code back to the Docker daemon
+ - Making sure the container doesn't terminate if the daemon is being restarted. It does so by keeping the stdout and stdin open
+
+
+
+What does `podman commit` do? When would you use it?
+
+It creates a new image from a container's changes.
+
+
+
+How would you transfer data from one container into another?
+
+
+
+What happens to the data of a container when the container exits?
+
+
+
+Explain what each of the following commands do:
+
+ * docker run
+ * docker rm
+ * docker ps
+ * docker pull
+ * docker build
+ * docker commit
+
+
+
+
+How do you remove old, non-running containers?
+
+1. To remove one or more containers, use the `docker container rm` command followed by the IDs of the containers you want to remove.
+2. The `docker system prune` command will remove all stopped containers, all dangling images, and all unused networks.
+3. `docker rm $(docker ps -a -q)` - this command will delete all stopped containers. The command `docker ps -a -q` returns all existing container IDs and passes them to the `rm` command, which deletes them. Any running containers will not be deleted.
+
+
+
+How does the Docker client communicate with the daemon?
+
+Via the local socket at `/var/run/docker.sock`
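+
+You can talk to that socket directly; a small sketch using curl (the version endpoint is shown as one example of the daemon's HTTP API):
+
+```
+# query the Docker daemon's API over the unix socket
+curl --unix-socket /var/run/docker.sock http://localhost/version
+```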
+
+
+
+Explain Docker interlock
+
+
+
+What is Docker Repository?
+
+
+
+Explain image layers
+
+A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the very last one is read-only.
+Each layer is only a set of differences from the layer before it. The layers are stacked on top of each other. When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.
+The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
+Because each container has its own writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state.
+
+
+
+What best practices are you familiar with related to working with containers?
+
+
+
+How do you manage persistent storage in Docker?
+
+
+
+How can you connect from the inside of your container to the localhost of your host, where the container runs?
+
+
+
+How do you copy files from Docker container to the host and vice versa?
+
+
+#### Containers - Docker Compose
+
+
+Explain what Docker Compose is and what it is used for
+
+Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
+
+For example, you can use it to set up an ELK stack where the services are: elasticsearch, logstash and kibana. Each runs in its own container.
+In general, it's useful for running applications composed of several different services. It lets you manage them as one deployed app, instead of multiple separate services.
+
+
+
+Describe the process of using Docker Compose
+
+* Define the services you would like to run together in a docker-compose.yml file (a minimal example is shown below)
+* Run `docker-compose up` to run the services
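+
+A minimal docker-compose.yml sketch (the service names, images and ports are arbitrary examples):
+
+```
+version: "3"
+services:
+  web:
+    image: httpd
+    ports:
+      - "8080:80"
+  db:
+    image: postgres
+    environment:
+      POSTGRES_PASSWORD: example
+```
+
+Running `docker-compose up -d` in the directory containing this file starts both containers together.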
+
+
+#### Containers - Docker Images
+
+
+What is Docker Hub?
+
+One of the most common registries for retrieving images.
+
+
+
+How to push an image to Docker Hub?
+
+`docker image push [username]/[image name]:[tag]`
+
+For example:
+
+`docker image push mario/web_app:latest`
+
+
+
+What is the difference between Docker Hub and Docker cloud?
+
+Docker Hub is a native Docker registry service which allows you to run pull
+and push commands to install and deploy Docker images from the Docker Hub.
+
+Docker Cloud is built on top of Docker Hub, so Docker Cloud provides
+you with more options/features compared to Docker Hub. One example is
+Swarm management, which means you can create new swarms in Docker Cloud.
+
+
+
+Explain Multi-stage builds
+
+Multi-stage builds allow you to produce smaller container images by splitting the build process into multiple stages.
+
+As an example, imagine you have one Dockerfile where you first build the application and then run it. The build process of the application might use packages and libraries you don't really need for running the application later. Moreover, the build process might produce different artifacts, not all of which are needed for running the application.
+
+How do you deal with that? Sure, one option is to add more instructions to remove all the unnecessary stuff, but there are a couple of issues with this approach:
+1. You need to know exactly what to remove, and that might not be as straightforward as you think
+2. You add new layers which are not really needed
+
+A better solution might be to use multi-stage builds, where one stage (the build process) passes the relevant artifacts/outputs to the stage that runs the application, as sketched below.
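+
+A minimal sketch of such a Dockerfile, assuming a Go application (the paths are illustrative):
+
+```
+# stage 1: build environment with the full toolchain
+FROM golang:1.19 AS build
+WORKDIR /src
+COPY . .
+RUN CGO_ENABLED=0 go build -o /bin/app .
+
+# stage 2: minimal runtime image; only the compiled binary is copied over
+FROM alpine:3.17
+COPY --from=build /bin/app /bin/app
+CMD ["/bin/app"]
+```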
+
+
+
+True or False? In multi-stage builds, artifacts can be copied between stages
+
+True. This allows us to eventually produce smaller images.
+
+
+
+What is `.dockerignore` used for?
+
+By default, Docker uses everything (all the files and directories) in the directory you use as the build context.
+`.dockerignore` is used for excluding files and directories from the build context (example below).
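+
+An illustrative `.dockerignore` (the entries are just common examples):
+
+```
+# keep these out of the build context
+.git
+node_modules
+*.log
+```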
+
+
+#### Containers - Networking
+
+
+What container network standards or architectures are you familiar with?
+
+CNM (Container Network Model):
+ * Requires a distributed key-value store (like etcd, for example) for storing the network configuration
+ * Used by Docker
+CNI (Container Network Interface):
+ * Network configuration should be in JSON format
+
+
+#### Containers - Docker Networking
+
+
+Which network specification does Docker use and what is its implementation called?
+
+Docker is using the CNM (Container Network Model) design specification.
+The implementation of CNM specification by Docker is called "libnetwork". It's written in Go.
+
+
+
+Explain the following blocks in regards to CNM:
+
+ * Networks
+ * Endpoints
+ * Sandboxes
+
+ * Networks: a software implementation of a switch. They are used for grouping and isolating a collection of endpoints.
+ * Endpoints: virtual network interfaces. Used for making connections.
+ * Sandboxes: an isolated network stack (interfaces, routing tables, ports, ...)
+
+
+
+True or False? If you would like to connect a container to multiple networks, you need multiple endpoints
+
+True. An endpoint can connect only to a single network.
+
+
+
+What are some features of libnetwork?
+
+* Native service discovery
+* ingress-based load balancer
+* network control plane and management plane
+
+
+#### Containers - Security
+
+
+What security best practices are there regarding containers?
+
+ * Install only the necessary packages in the container
+ * Don't run containers as root when possible
+ * Don't mount the Docker daemon unix socket into any of the containers
+ * Set volumes and the container's filesystem to read-only
+ * DO NOT run containers with the `--privileged` flag
+
+
+
+A container can cause a kernel panic and bring down the whole host. What preventive actions can you apply to avoid this specific situation?
+
+ * Install only the necessary packages in the container
+ * Set volumes and the container's filesystem to read-only
+ * DO NOT run containers with the `--privileged` flag
+
+
+#### Containers - Docker in Production
+
+
+What are some best practices you follow in regards to using containers in production?
+
+Images:
+ * Use images from official repositories
+ * Include only the packages you are going to use. Nothing else.
+ * Specify a tag in the FROM instruction. Not using a tag means you'll always pull the latest, which changes over time and might lead to unexpected results.
+ * Do not use environment variables to share secrets
+ * Keep images small! - you want them only to include what is required for the application to run successfully. Nothing else.
+Components:
+ * Secured connections between components (e.g. client and server)
+
+
+
+True or False? It's recommended for production environments that the Docker client and server communicate over the network using an HTTP socket
+
+False. Communication between client and server shouldn't be done over HTTP since it's insecure. It's better to force the daemon to only accept network connections secured with TLS.
+Basically, the Docker daemon will only accept secured connections with certificates from a trusted CA.
+
+
+
+What forms of self-healing are available for Docker containers?
+
+Restart policies. They allow you to automatically restart containers after certain events.
+
+
+
+What restart policies are you familiar with?
+
+The main ones (a usage sketch follows the list):
+
+ * always: restart the container when it stops (unless it was explicitly stopped, e.g. with `docker container stop`)
+ * unless-stopped: restart the container unless it was in stopped status
+ * no: don't restart the container at any point (the default policy)
+ * on-failure: restart the container when it exits due to an error (= an exit code different from zero)
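+
+A usage sketch (httpd is an arbitrary example image):
+
+```
+# restart automatically on non-zero exit, up to 5 attempts
+docker container run -d --restart on-failure:5 httpd
+
+# always restart, unless the container was explicitly stopped
+docker container run -d --restart unless-stopped httpd
+```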
+
+
+#### Containers - Docker Misc
+
+Explain what is Docker Bench
+
+
diff --git a/exercises/devops/README.md b/exercises/devops/README.md
new file mode 100644
index 0000000..d743d30
--- /dev/null
+++ b/exercises/devops/README.md
@@ -0,0 +1,435 @@
+## DevOps
+
+
+What is DevOps?
+
+You can answer it by describing what DevOps means to you and/or rely on how companies define it. I've put here a couple of examples.
+
+Amazon:
+
+"DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market."
+
+Microsoft:
+
+"DevOps is the union of people, process, and products to enable continuous delivery of value to our end users. The contraction of “Dev” and “Ops” refers to replacing siloed Development and Operations to create multidisciplinary teams that now work together with shared and efficient practices and tools. Essential DevOps practices include agile planning, continuous integration, continuous delivery, and monitoring of applications."
+
+Red Hat:
+
+"DevOps describes approaches to speeding up the processes by which an idea (like a new software feature, a request for enhancement, or a bug fix) goes from development to deployment in a production environment where it can provide value to the user. These approaches require that development teams and operations teams communicate frequently and approach their work with empathy for their teammates. Scalability and flexible provisioning are also necessary. With DevOps, those that need power the most, get it—through self service and automation. Developers, usually coding in a standard development environment, work closely with IT operations to speed software builds, tests, and releases—without sacrificing reliability."
+
+Google:
+
+"...The organizational and cultural movement that aims to increase software delivery velocity, improve service reliability, and build shared ownership among software stakeholders"
+
+
+
+What are the benefits of DevOps? What can it help us to achieve?
+
+ * Collaboration
+ * Improved delivery
+ * Security
+ * Speed
+ * Scale
+ * Reliability
+
+
+
+What are the anti-patterns of DevOps?
+
+A couple of examples:
+
+* One person is in charge of specific tasks. For example there is only one person who is allowed to merge the code of everyone else into the repository.
+* Treating production differently from development environment. For example, not implementing security in development environment
+* Not allowing someone to push to production on Friday ;)
+
+
+
+How would you describe a successful DevOps engineer or a team?
+
+The answer can focus on:
+
+* Collaboration
+* Communication
+* Set up and improve workflows and processes (related to testing, delivery, ...)
+* Dealing with issues
+
+Things to think about:
+
+* What DevOps teams or engineers should NOT focus on or do?
+* Do DevOps teams or engineers have to be innovative or practice innovation as part of their role?
+
+
+#### Tooling
+
+
+What are you taking into consideration when choosing a tool/technology?
+
+A few ideas to think about:
+
+ * mature/stable vs. cutting edge
+ * community size
+ * architecture aspects - agent vs. agentless, master vs. masterless, etc.
+ * learning curve
+
+
+
+Can you describe which tool or platform you chose to use in some of the following areas and how?
+
+ * CI/CD
+ * Provisioning infrastructure
+ * Configuration Management
+ * Monitoring & alerting
+ * Logging
+ * Code review
+ * Code coverage
+ * Issue Tracking
+ * Containers and Containers Orchestration
+ * Tests
+
+This is a more practical version of the previous question where you might be asked additional specific questions on the technology you chose
+
+ * CI/CD - Jenkins, Circle CI, Travis, Drone, Argo CD, Zuul
+ * Provisioning infrastructure - Terraform, CloudFormation
+ * Configuration Management - Ansible, Puppet, Chef
+ * Monitoring & alerting - Prometheus, Nagios
+ * Logging - Logstash, Graylog, Fluentd
+ * Code review - Gerrit, Review Board
+ * Code coverage - Cobertura, Clover, JaCoCo
+ * Issue tracking - Jira, Bugzilla
+ * Containers and Containers Orchestration - Docker, Podman, Kubernetes, Nomad
+ * Tests - Robot, Serenity, Gauge
+
+
+
+A team member of yours, suggests to replace the current CI/CD platform used by the organization with a new one. How would you reply?
+
+Things to think about:
+
+* What do we gain from doing so? Are there new features in the new platform? Does the new platform deal with some of the limitations of the current platform?
+* What is this suggestion based on? In other words, did they try out the new platform? Was there extensive technical research?
+* What would the switch from one platform to another require from the organization? For example, training users of the platform? How much time does the team have to invest in such a move?
+
+
+#### Version Control
+
+
+What is Version Control?
+
+* Version control is the system of tracking and managing changes to software code.
+* It helps software teams to manage changes to source code over time.
+* Version control also helps developers move faster and allows software teams to preserve efficiency and agility as the team scales to include more developers.
+
+
+
+What is a commit?
+
+* In Git, a commit is a snapshot of your repo at a specific point in time.
+* The git commit command will save all staged changes, along with a brief description from the user, in a “commit” to the local repository.
+
+
+
+What is a merge?
+
+* Merging is Git's way of putting a forked history back together again. The git merge command lets you take the independent lines of development created by git branch and integrate them into a single branch.
+
+
+
+What is a merge conflict?
+
+* A merge conflict is an event that occurs when Git is unable to automatically resolve differences in code between two commits. When all the changes in the code occur on different lines or in different files, Git will successfully merge commits without your help.
+
+
+
+What best practices are you familiar with regarding version control?
+
+* Use a descriptive commit message
+* Make each commit a logical unit
+* Incorporate others' changes frequently
+* Share your changes frequently
+* Coordinate with your co-workers
+* Don't commit generated files
+
+
+
+Would you prefer a "configuration->deployment" model or "deployment->configuration"? Why?
+
+Both have advantages and disadvantages.
+With "configuration->deployment" model for example, where you build one image to be used by multiple deployments, there is less chance of deployments being different from one another, so it has a clear advantage of a consistent environment.
+
+
+
+Explain mutable vs. immutable infrastructure
+
+In the mutable infrastructure paradigm, changes are applied on top of the existing infrastructure, and over time
+the infrastructure builds up a history of changes. Ansible, Puppet and Chef are examples of tools which
+follow mutable infrastructure paradigm.
+
+In the immutable infrastructure paradigm, every change is actually a new infrastructure. So a change
+to a server will result in a new server instead of updating it. Terraform is an example of technology
+which follows the immutable infrastructure paradigm.
+
+
+#### Software Distribution
+
+
+Explain "Software Distribution"
+
+Read [this](https://venam.nixers.net/blog/unix/2020/03/29/distro-pkgs.html) fantastic article on the topic.
+
+From the article: "Thus, software distribution is about the mechanism and the community that takes the burden and decisions to build an assemblage of coherent software that can be shipped."
+
+
+
+Why are there multiple software distributions? What differences can they have?
+
+Different distributions can focus on different things like: focus on different environments (server vs. mobile vs. desktop), support specific hardware, specialize in different domains (security, multimedia, ...), etc. Basically, different aspects of the software and what it supports, get different priority in each distribution.
+
+
+
+What is a Software Repository?
+
+Wikipedia: "A software repository, or “repo” for short, is a storage location for software packages. Often a table of contents is stored, as well as metadata."
+
+Read more [here](https://en.wikipedia.org/wiki/Software_repository)
+
+
+
+What ways are there to distribute software? What are the advantages and disadvantages of each method?
+
+ * Source - maintain a build script within the version control system so that users can build your app after cloning the repository. Advantage: users can quickly check out different versions of the application. Disadvantage: requires build tools to be installed on the user's machine.
+ * Archive - collect all your app files into one archive (e.g. tar) and deliver it to the user. Advantage: users get everything they need in one file. Disadvantage: requires repeating the same procedure when updating, and it's not good if there are a lot of dependencies.
+ * Package - depending on the OS, you can use your OS package format (e.g. in RHEL/Fedora it's RPM) to deliver your software with a way to install, uninstall and update it using the standard packager commands. Advantage: the package manager takes care of installation, uninstallation, updating and dependency management. Disadvantage: requires managing a package repository.
+ * Images - either VM or container images, where your package is included with everything it needs in order to run successfully. Advantage: everything is preinstalled and it has a high degree of environment isolation. Disadvantage: requires knowledge of building and optimizing images.
+
+
+
+Are you familiar with "The Cathedral and the Bazaar models"? Explain each of the models
+
+* Cathedral - source code released when software is released
+* Bazaar - source code is always available publicly (e.g. Linux Kernel)
+
+
+
+What is caching? How does it work? Why is it important?
+
+Caching is fast access to frequently used resources which are computationally expensive or IO intensive and do not change often. There can be several layers of cache, starting from CPU caches up to distributed cache systems. Common ones are in-memory caching and distributed caching.
+Caches are typically data structures that contain some data, such as a hashtable or dictionary. However, any data structure can provide caching capabilities: a set, sorted set, sorted dictionary, etc. While caching is used in many applications, it can create subtle bugs if not implemented or used correctly. For example, cache invalidation, expiration or updating is usually quite challenging.
+
+
+
+Explain stateless vs. stateful
+
+Stateless applications don't store any data on the host, which makes them ideal for horizontal scaling and microservices.
+Stateful applications depend on the storage to save state and data, typically databases are stateful applications.
+
+
+
+What is Reliability? How does it fit DevOps?
+
+Reliability, when used in a DevOps context, is the ability of a system to recover from infrastructure failure or disruption. Part of it is also being able to scale based on the demands of your organization or team.
+
+
+
+What "Availability" means? What means are there to track Availability of a service?
+
+
+
+Why isn't 100% availability a target? Why do most companies or teams set it to 99.X%?
+
+
+
+Describe the workflow of setting up some type of web server (Apache, IIS, Tomcat, ...)
+
+
+
+How does a web server work?
+
+
+
+Explain "Open Source"
+
+
+
+Describe the architecture of a service/app/project/... you designed and/or implemented
+
+
+
+What types of tests are you familiar with?
+
+Styling, unit, functional, API, integration, smoke, scenario, ...
+
+You should be able to explain those that you mention.
+
+
+
+You need to periodically install a package (unless it already exists) on different operating systems (Ubuntu, RHEL, ...). How would you do it?
+
+There are multiple ways to answer this question (there is no right and wrong here); a sketch of the cron-based option follows the list:
+
+* Simple cron job
+* Pipeline with a configuration management technology (such as Puppet, Ansible, Chef, etc.)
+...
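+
+A minimal sketch of the cron option (the script name, schedule and package are made up for illustration):
+
+```
+#!/bin/bash
+# ensure_package.sh <package> - install a package only if it's missing.
+# Could be scheduled via cron, e.g.: 0 3 * * * /usr/local/bin/ensure_package.sh nginx
+pkg="$1"
+if command -v apt-get >/dev/null 2>&1; then        # Ubuntu/Debian
+    dpkg -s "$pkg" >/dev/null 2>&1 || apt-get install -y "$pkg"
+elif command -v yum >/dev/null 2>&1; then          # RHEL/CentOS
+    rpm -q "$pkg" >/dev/null 2>&1 || yum install -y "$pkg"
+fi
+```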
+
+
+
+What is Chaos Engineering?
+
+Wikipedia: "Chaos engineering is the discipline of experimenting on a software system in production in order to build confidence in the system's capability to withstand turbulent and unexpected conditions"
+
+Read about Chaos Engineering [here](https://en.wikipedia.org/wiki/Chaos_engineering)
+
+
+
+What is "infrastructure as code"? What implementation of IAC are you familiar with?
+
+IaC (infrastructure as code) is a declarative approach to defining the infrastructure or architecture of a system. Some implementations are ARM templates for Azure, and Terraform, which can work across multiple cloud providers.
+
+
+
+What benefits infrastructure-as-code has?
+
+- fully automated process of provisioning, modifying and deleting your infrastructure
+- version control for your infrastructure which allows you to quickly rollback to previous versions
+- validate infrastructure quality and stability with automated tests and code reviews
+- makes infrastructure tasks less repetitive
+
+
+
+How do you manage build artifacts?
+
+Build artifacts are usually stored in a repository. They can be used in release pipelines for deployment purposes. Usually there is a retention period for build artifacts.
+
+
+
+What Continuous Integration solution are you using/prefer and why?
+
+
+
+What deployment strategies are you familiar with or have used?
+
+ There are several deployment strategies:
+ * Rolling
+ * Blue green deployment
+ * Canary releases
+ * Recreate strategy
+
+
+
+
+You joined a team where everyone is developing one project, and the practice is to run tests locally on their workstation and push to the repository if the tests passed. What is the problem with the process as it is now, and how would you improve it?
+
+
+
+Explain test-driven development (TDD)
+
+
+
+Explain agile software development
+
+
+
+What do you think about the following sentence?: "implementing or practicing DevOps leads to more secure software"
+
+
+
+Do you know what is a "post-mortem meeting"? What is your opinion on that?
+
+
+
+What is a configuration drift? What problems is it causing?
+
+Configuration drift happens when, in an environment of servers with the exact same configuration and software, updates or configuration changes are applied to certain servers but not to the others, and over time these servers become
+slightly different from all the others.
+
+This situation might lead to bugs which are hard to identify and reproduce.
+
+
+
+How to deal with a configuration drift?
+
+Configuration drift can be avoided with a desired state configuration (DSC) implementation. Desired state configuration can be a declarative file that defines how a system should look. There are tools to enforce the desired state, such as Terraform or Azure DSC. There are incremental and complete (full) strategies.
+
+
+
+Explain Declarative and Procedural styles. The technologies you are familiar with (or using) are using procedural or declarative style?
+
+Declarative - You write code that specifies the desired end state
+Procedural - You describe the steps to get to the desired end state
+
+Declarative Tools - Terraform, Puppet, CloudFormation
+Procedural Tools - Ansible, Chef
+
+To better emphasize the difference, consider creating two virtual instances/servers.
+In declarative style, you would specify two servers and the tool will figure out how to reach that state.
+In procedural style, you need to specify the steps to reach the end state of two instances/servers - for example, create a loop and in each iteration of the loop create one instance (running the loop twice, of course). A small sketch of the contrast follows below.
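+
+A small sketch of the contrast (the AMI ID is a placeholder; the Terraform snippet in the comment is paraphrased):
+
+```
+# procedural: spell out the steps - loop and create one instance per iteration
+for i in 1 2; do
+    aws ec2 run-instances --image-id ami-12345678 --count 1
+done
+
+# declarative: you would instead declare the end state, e.g. Terraform's
+#   resource "aws_instance" "example" { count = 2 ... }
+# and let the tool work out the steps to reach it
+```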
+
+
+
+Do you have experience with testing cross-projects changes? (aka cross-dependency)
+
+Note: cross-dependency is when you have two or more changes to separate projects and you would like to test them in a mutual build instead of testing each change separately.
+
+
+
+Have you contributed to an open source project? Tell me about this experience
+
+
+
+What is Distributed Tracing?
+
+
+
+What is GitOps?
+
+GitLab: "GitOps is an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and CI/CD tooling, and applies them to infrastructure automation".
+
+Read more [here](https://about.gitlab.com/topics/gitops)
+
+
+#### SRE
+
+
+What are the differences between SRE and DevOps?
+
+Google: "One could view DevOps as a generalization of several core SRE principles to a wider range of organizations, management structures, and personnel."
+
+Read more about it [here](https://sre.google/sre-book/introduction)
+
+
+
+What SRE team is responsible for?
+
+Google: "the SRE team is responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of their services"
+
+Read more about it [here](https://sre.google/sre-book/introduction)
+
+
+
+What is an error budget?
+
+Atlassian: "An error budget is the maximum amount of time that a technical system can fail without contractual consequences."
+
+Read more about it [here](https://www.atlassian.com/incident-management/kpis/error-budget)
+
+
+
+What do you think about the following statement: "100% is the only right availability target for a system"
+
+Wrong. No system can guarantee 100% availability, as no system is immune to downtime.
+Many systems and services will fall somewhere between 99% and 100% uptime (or at least this is how most systems and services should be).
+
+
+
+What are MTTF (mean time to failure) and MTTR (mean time to repair)? What these metrics help us to evaluate?
+
+ * MTTF (mean time to failure), also known as uptime, can be defined as how long the system runs before it fails.
+ * MTTR (mean time to recover), on the other hand, is the amount of time it takes to repair a broken system.
+ * MTBF (mean time between failures) is the amount of time between failures of the system.
+
+
+
+What is the role of monitoring in SRE?
+
+Google: "Monitoring is one of the primary means by which service owners keep track of a system’s health and availability"
+
+Read more about it [here](https://sre.google/sre-book/introduction)
+
diff --git a/exercises/git/README.md b/exercises/git/README.md
new file mode 100644
index 0000000..1b1e3c3
--- /dev/null
+++ b/exercises/git/README.md
@@ -0,0 +1,200 @@
+## Git
+
+|Name|Topic|Objective & Instructions|Solution|Comments|
+|--------|--------|------|----|----|
+| My first Commit | Commit | [Exercise](exercises/git/commit_01.md) | [Solution](exercises/git/solutions/commit_01_solution.md) | |
+| Time to Branch | Branch | [Exercise](exercises/git/branch_01.md) | [Solution](exercises/git/solutions/branch_01_solution.md) | |
+| Squashing Commits | Commit | [Exercise](exercises/git/squashing_commits.md) | [Solution](exercises/git/solutions/squashing_commits.md) | |
+
+
+How do you know if a certain directory is a git repository?
+
+You can check if there is a ".git" directory.
+
+
+
+Explain the following: `git directory`, `working directory` and `staging area`
+
+This answer taken from [git-scm.com](https://git-scm.com/book/en/v1/Getting-Started-Git-Basics#_the_three_states)
+
+"The Git directory is where Git stores the meta data and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.
+
+The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.
+
+The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It’s sometimes referred to as the index, but it’s becoming standard to refer to it as the staging area."
+
+
+
+What is the difference between `git pull` and `git fetch`?
+
+In short, `git pull` = `git fetch` + `git merge`
+
+When you run git pull, it gets all the changes from the remote or central
+repository and attaches it to your corresponding branch in your local repository.
+
+git fetch gets all the changes from the remote repository and stores them in
+a separate (remote-tracking) branch in your local repository, without touching your working branch
+
+
+
+How to check if a file is tracked and if not, then track it?
+
+There are different ways to check whether a file is tracked or not:
+
+ - `git ls-files --error-unmatch [file_name]` -> an exit code of 0 means it's tracked
+ - `git blame [file_name]`
+ ...
+
+If the file isn't tracked, start tracking it with `git add [file_name]`.
+
+
+
+How can you see which changes you have made before committing them?
+
+`git diff`
+
+
+
+What does `git status` do?
+
+
+
+You have two branches - main and devel. How do you make sure devel is in sync with main?
+
+```
+git checkout main
+git pull
+git checkout devel
+git merge main
+```
+
+
+#### Git - Merge
+
+
+You have two branches - main and devel. How do you put devel into main?
+
+git checkout main
+git merge devel
+git push origin main
+
+
+
+How to resolve git merge conflicts?
+
+
+First, you open the files which are in conflict and identify what the conflicts are.
+Next, based on what is accepted in your company or team, you either discuss the conflicts with your
+colleagues or resolve them by yourself.
+After resolving the conflicts, you add the files with `git add [file_name]`.
+Finally, you run `git commit` to complete the merge (or `git rebase --continue` if the conflict occurred during a rebase).
+
+
+
+
+What merge strategies are you familiar with?
+
+Mentioning two or three should be enough and it's probably good to mention that 'recursive' is the default one.
+
+recursive
+resolve
+ours
+theirs
+
+This page explains it the best: https://git-scm.com/docs/merge-strategies
+
+
+
+Explain Git octopus merge
+
+Probably good to mention that:
+
+ * It's used for merging more than one branch (and it's also the default for such use cases)
+ * It's primarily meant for bundling topic branches together
+
+This is a great article about Octopus merge: http://www.freblogg.com/2016/12/git-octopus-merge.html
+
+
+
+What is the difference between git reset and git revert?
+
+
+
+`git revert` creates a new commit which undoes the changes from the last commit.
+
+`git reset`, depending on the usage, can modify the index or change the commit which the branch head
+is currently pointing at.
+
+
+
+#### Git - Rebase
+
+
+You would like to move the fourth commit to the top. How would you achieve that?
+
+Using the `git rebase -i` (interactive rebase) command: reorder the commit lines in the editor that opens
+
+
+
+In what situations do you use git rebase?
+
+
+
+How do you revert a specific file to a previous commit?
+
+```
+git checkout HEAD~1 -- /path/of/the/file
+```
+
+
+
+How to squash the last two commits?
+
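+One common way, sketched with interactive rebase:
+
+```
+git rebase -i HEAD~2
+# in the editor that opens, keep "pick" on the first (older) commit
+# and change "pick" to "squash" (or "s") on the second
+```
+
+Alternatively, `git reset --soft HEAD~2` followed by `git commit` creates a single commit from the combined changes.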
+
+
+What is the .git directory? What can you find there?
+
+The .git folder contains all the information that is necessary for your project in version control: all the information about commits, the remote repository address, etc. It also contains a log that stores your commit history so that you can roll back in history.
+
+
+This info copied from [https://stackoverflow.com/questions/29217859/what-is-the-git-folder](https://stackoverflow.com/questions/29217859/what-is-the-git-folder)
+
+
+
+What are some Git anti-patterns? Things that you shouldn't do
+
+ * Waiting too long between commits (huge commits are harder to review and revert)
+ * Removing the .git directory :)
+
+
+
+How do you remove a remote branch?
+
+You delete a remote branch with this syntax:
+
+`git push origin :[branch_name]`
+
+Or with the more explicit form: `git push origin --delete [branch_name]`
+
+
+
+Are you familiar with gitattributes? When would you use it?
+
+gitattributes allow you to define attributes per pathname or path pattern.
+
+You can use it for example to control line endings in files. Windows and Unix based systems use different characters for new lines (\r\n and \n respectively). Using a gitattributes file with `* text=auto`, line endings are normalized for anyone working with the repository. This way, if you use the Git project on Windows you'll get \r\n and if you are using Unix or Linux, you'll get \n.
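+
+A minimal .gitattributes sketch (the shell-script rule is just an illustrative assumption):
+
+```
+# normalize line endings for all text files
+* text=auto
+# always check out shell scripts with LF endings
+*.sh text eol=lf
+```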
+
+
+
+How do you discard local file changes? (before commit)
+
+`git checkout -- <file_name>`
+
+
+
+How do you discard local commits?
+
+`git reset HEAD~1` for removing the last commit.
+If you would like to also discard the changes, run `git reset --hard HEAD~1`
+
+
+
+True or False? To remove a file from git but not from the filesystem, one should use git rm
+
+False. Plain `git rm` removes the file from both Git and the filesystem. If you would like to keep the file on your filesystem, use `git rm --cached <file>`
+
diff --git a/exercises/kubernetes/README.md b/exercises/kubernetes/README.md
new file mode 100644
index 0000000..d1c6e3f
--- /dev/null
+++ b/exercises/kubernetes/README.md
@@ -0,0 +1,1953 @@
+## Kubernetes
+
+
+
+### Kubernetes Exercises
+
+#### Developer & "Regular" User Path
+
+|Name|Topic|Objective & Instructions|Solution|Comments|
+|--------|--------|------|----|----|
+| My First Pod | Pods | [Exercise](pods_01.md) | [Solution](solutions/pods_01_solution.md) | |
+| "Killing" Containers | Pods | [Exercise](killing_containers.md) | [Solution](solutions/killing_containers.md) | |
+| Creating a Service | Service | [Exercise](services_01.md) | [Solution](solutions/services_01_solution.md) | |
+| Creating a ReplicaSet | ReplicaSet | [Exercise](replicaset_01.md) | [Solution](solutions/replicaset_01_solution.md) | |
+| Operating ReplicaSets | ReplicaSet | [Exercise](replicaset_02.md) | [Solution](solutions/replicaset_02_solution.md) | |
+| ReplicaSets Selectors | ReplicaSet | [Exercise](replicaset_03.md) | [Solution](solutions/replicaset_03_solution.md) | |
+
+### Kubernetes Self Assessment
+
+
+What is Kubernetes? Why are organizations using it?
+
+Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
+
+To understand what Kubernetes is good for, let's look at some examples:
+
+* You would like to run a certain application in a container on multiple different locations. Sure, if it's 2-3 servers/locations, you can do it by yourself, but it can be challenging to scale to many additional locations.
+* Performing updates and changes across hundreds of containers
+* Handling cases where the current load requires scaling up (or down)
+
+
+
+When or why NOT to use Kubernetes?
+
+ - If you are a big team of engineers (e.g. 200) deploying applications using containers and you need to manage scaling, rolling out updates, etc., you probably want to use Kubernetes
+
+ - If you manage low-level infrastructure or bare metal servers, Kubernetes is probably not what you need or want
+ - If you are a small team (e.g. 20-50 engineers), Kubernetes might be overkill (even if you need scaling, rolling out updates, etc.)
+
+
+
+What Kubernetes objects are there?
+
+ * Pod
+ * Service
+ * ReplicationController
+ * ReplicaSet
+ * DaemonSet
+ * Namespace
+ * ConfigMap
+ ...
+
+
+
+What fields are mandatory with any Kubernetes object?
+
+metadata, kind and apiVersion
+
+
+
+What actions or operations do you consider as best practices when it comes to Kubernetes?
+
+ - Always make sure Kubernetes YAML files are valid. Applying automated checks and pipelines is recommended.
+ - Always specify requests and limits to prevent situations where containers use the entire cluster memory, which may lead to OOM issues
+
+
+
+What is kubectl?
+
+Kubectl is the Kubernetes command line tool that allows you to run commands against Kubernetes clusters. For example, you can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
+
+
+
+What Kubernetes objects do you usually use when deploying applications in Kubernetes?
+
+* Deployment - creates the Pods and watches over them
+* Service - routes traffic to Pods internally
+* Ingress - routes traffic from outside the cluster
+
+
+#### Kubernetes - Cluster
+
+
+What is a Kubernetes Cluster?
+
+Red Hat Definition: "A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster.
+At a minimum, a cluster contains a worker node and a master node."
+
+Read more [here](https://www.redhat.com/en/topics/containers/what-is-a-kubernetes-cluster)
+
+
+
+What is a Node?
+
+A node is a virtual or a physical machine that serves as a worker for running the applications.
+It's recommended to have at least 3 nodes in a production environment.
+
+
+
+What is the master node responsible for?
+
+The master coordinates all the workflows in the cluster:
+
+* Scheduling applications
+* Managing desired state
+* Rolling out new updates
+
+
+
+Which command will list the nodes of the cluster?
+
+`kubectl get nodes`
+
+
+
+True or False? Every cluster must have 0 or more master nodes and at least 1 worker
+
+False. A Kubernetes cluster consists of at least 1 master and can have 0 workers (although that wouldn't be very useful...)
+
+
+
+What are the components of the master node?
+
+ * API Server - the Kubernetes API. All cluster components communicate through it
+ * Scheduler - assigns an application with a worker node it can run on
+ * Controller Manager - cluster maintenance (replications, node failures, etc.)
+ * etcd - stores cluster configuration
+
+
+
+What are the components of a worker node?
+
+ * Kubelet - an agent responsible for node communication with the master.
+ * Kube-proxy - load balancing traffic between app components
+ * Container runtime - the engine that runs the containers (Podman, Docker, ...)
+
+
+
+Place the components on the right side of the image in the right place in the drawing
+
+
+
+
+
+
+You are managing multiple Kubernetes clusters. How do you quickly change between the clusters using kubectl?
+
+`kubectl config use-context <context_name>`
+
+
+
+How do you prevent high memory usage in your Kubernetes cluster and possibly issues like memory leak and OOM?
+
+Apply requests and limits, especially on third party applications (where the uncertainty is even bigger)
+
+
+
+Do you have experience with deploying a Kubernetes cluster? If so, can you describe the process in high-level?
+
+1. Create multiple instances you will use as Kubernetes nodes/workers. Create also an instance to act as the Master. The instances can be provisioned in a cloud or they can be virtual machines on bare metal hosts.
+2. Provision a certificate authority that will be used to generate TLS certificates for the different components of a Kubernetes cluster (kubelet, etcd, ...)
+ 1. Generate a certificate and private key for the different components
+3. Generate kubeconfigs so the different clients of Kubernetes can locate the API servers and authenticate.
+4. Generate an encryption key that will be used for encrypting the cluster data
+5. Create an etcd cluster
+
+
+#### Kubernetes - Pods
+
+
+Explain what is a Pod
+
+A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.
+Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
+
+
+
+Deploy a pod called "my-pod" using the nginx:alpine image
+
+`kubectl run my-pod --image=nginx:alpine --restart=Never`
+
+If you are a Kubernetes beginner you should know that this is not a common way to run Pods. The common way is to run a Deployment which in turn runs Pod(s).
+In addition, Pods and/or Deployments are usually defined in files rather than executed directly using only the CLI arguments.
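+
+For reference, a minimal manifest-based sketch of the same Pod, which you would save to a file and apply with `kubectl apply -f my-pod.yaml` (the file name is just an example):
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-pod
+spec:
+  containers:
+  - name: my-pod
+    image: nginx:alpine
+```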
+
+
+
+What are your thoughts on "Pods are not meant to be created directly"?
+
+Pods are usually indeed not created directly. You'll notice that Pods are usually created as part of other entities such as Deployments or ReplicaSets.
+If a Pod dies, Kubernetes will not bring it back. This is why it's more useful for example to define ReplicaSets that will make sure that a given number of Pods will always run, even after a certain Pod dies.
+
+
+
+How many containers can a pod contain?
+
+A pod can include multiple containers but in most cases it would probably be one container per pod.
+
+
+
+What use cases exist for running multiple containers in a single pod?
+
+A web application with separate (= in their own containers) logging and monitoring components/adapters is one example.
+A CI/CD pipeline (using Tekton for example) can run multiple containers in one Pod if a Task contains multiple commands.
+
+
+
+What are the possible Pod phases?
+
+ * Running - The Pod is bound to a node and at least one container is running
+ * Failed - At least one container in the Pod terminated with a failure
+ * Succeeded - Every container in the Pod terminated with success
+ * Unknown - Pod's state could not be obtained
+ * Pending - Containers are not yet running (Perhaps images are still being downloaded or the pod wasn't scheduled yet)
+
+
+
+True or False? By default, pods are isolated. This means they are unable to receive traffic from any source
+
+False. By default, pods are non-isolated = pods accept traffic from any source.
+
+
+
+True or False? The "Pending" phase means the Pod was not yet accepted by the Kubernetes cluster so the scheduler can't run it unless it's accepted
+
+False. "Pending" is after the Pod was accepted by the cluster, but the container can't run for different reasons like images not yet downloaded.
+
+
+
+How to list the pods in the current namespace?
+
+`kubectl get po`
+
+
+
+How to view all the pods running in all the namespaces?
+
+`kubectl get pods --all-namespaces`
+
+
+
+True or False? A single Pod can be split across multiple nodes
+
+False. A single Pod can run on a single node.
+
+
+
+How to delete a pod?
+
+`kubectl delete pod pod_name`
+
+
+
+How to find out on which node a certain pod is running?
+
+`kubectl get po -o wide`
+
+
+
+What are "Static Pods"?
+
+* Managed directly by Kubelet on specific node
+* API server is not observing static Pods
+* For each static Pod there is a mirror Pod on kubernetes API server but it can't be managed from there
+
+Read more about it [here](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod)
+
+
+
+True or False? A volume defined in Pod can be accessed by all the containers of that Pod
+
+True.
+
+
+
+What happens when you run a Pod?
+
+1. Kubectl sends a request to the API server to create the Pod
+2. The Scheduler detects that there is an unassigned Pod (by monitoring the API server)
+3. The Scheduler chooses a node to assign the Pod to
+4. The Scheduler updates the API server about which node it chose
+5. Kubelet (which also monitors the API server) notices there is a Pod assigned to the same node on which it runs and that Pod isn't running
+6. Kubelet sends request to the container engine (e.g. Docker) to create and run the containers
+7. An update is sent by Kubelet to the API server (notifying it that the Pod is running)
+
+
+
+How to confirm a container is running after running the command kubectl run web --image nginxinc/nginx-unprivileged
+
+* When you run `kubectl describe pods <pod_name>` it will tell whether the container is running:
+`Status: Running`
+* Run a command inside the container: `kubectl exec web -- ls`
+
+
+
+After running kubectl run database --image mongo you see the status is "CrashLoopBackOff". What could have possibly gone wrong and what do you do to confirm?
+
+"CrashLoopBackOff" means the Pod is starting, crashing, starting... and so it repeats itself.
+There are many different reasons to get this error - lack of permissions, init-container misconfiguration, persistent volume connection issues, etc.
+
+One of the ways to check why it happened is to run `kubectl describe po <pod_name>` and have a look at the exit code
+
+```
+ Last State: Terminated
+ Reason: Error
+ Exit Code: 100
+```
+
+Another way to check what's going on is to run `kubectl logs <pod_name>`. This will provide us with the logs from the containers running in that Pod.
+
+
+
+Explain the purpose of the following lines
+
+```
+livenessProbe:
+ exec:
+ command:
+ - cat
+ - /appStatus
+ initialDelaySeconds: 10
+ periodSeconds: 5
+```
+
+
+These lines make use of `liveness probe`. It's used to restart a container when it reaches a non-desired state.
+In this case, if the command `cat /appStatus` fails, Kubernetes will kill the container and will apply the restart policy. The `initialDelaySeconds: 10` means that Kubelet will wait 10 seconds before running the command/probe for the first time. From that point on, it will run it every 5 seconds, as defined with `periodSeconds`
+
+
+
+Explain the purpose of the following lines
+
+```
+readinessProbe:
+ tcpSocket:
+ port: 2017
+ initialDelaySeconds: 15
+ periodSeconds: 20
+```
+
+
+They define a readiness probe where the Pod will not be marked as "Ready" before it's possible to connect to port 2017 of the container. The first check/probe will start 15 seconds after the container starts to run and will continue to run the check/probe every 20 seconds until it manages to connect to the defined port.
+
+
+
+What does the "ErrImagePull" status of a Pod means?
+
+It wasn't able to pull the image specified for running the container(s). This can happen if the client didn't authenticated for example.
+More details can be obtained with `kubectl describe po `.
+
+
+
+What happens when you delete a Pod?
+
+1. The `TERM` signal is sent to the main processes inside the containers of the given Pod, asking them to shut down
+2. Each container is given a grace period of 30 seconds to shut down the processes gracefully
+3. If the grace period expires, the `KILL` signal is used to kill the processes forcefully and the containers as well
+
+
+
+Explain liveness probes
+
+Liveness probes are a useful mechanism for restarting a container when a certain user-defined check/probe fails.
+For example, the user can define that the command `cat /app/status` will run every X seconds, and the moment this command fails, the container will be restarted.
+
+You can read more about it in [kubernetes.io](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes)
+
+
+
+Explain readiness probes
+
+Readiness probes are used by Kubelet to know when a container is ready to accept traffic.
+For example, a readiness probe can be to connect to port 8080 on a container. Once Kubelet manages to connect to it, the Pod is marked as ready
+
+You can read more about it in [kubernetes.io](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes)
+
+
+
+How does readiness probe status affect Services when they are combined?
+
+Only Pods whose readiness state is set to Success will receive requests sent to the Service.
+
+
+
+Why is it usually considered better to include one container per Pod?
+
+One reason is scaling: since the Pod is the unit of scaling, putting multiple containers in one Pod makes it harder to scale just one of them when only that one needs scaling.
+
+
+#### Kubernetes - Deployments
+
+
+What is a "Deployment" in Kubernetes?
+
+A Kubernetes Deployment is used to tell Kubernetes how to create or modify instances of the pods that hold a containerized application.
+Deployments can scale the number of replica pods, enable rollout of updated code in a controlled manner, or roll back to an earlier deployment version if necessary.
+
+A Deployment is a declarative statement for the desired state for Pods and Replica Sets.
+
+
+
+How to create a deployment?
+
+```
+cat << EOF | kubectl create -f -
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx
+EOF
+```
+
+Or imperatively: `kubectl create deployment nginx --image=nginx`
+
+
+
+How to edit a deployment?
+
+`kubectl edit deployment some-deployment`
+
+
+
+What happens after you edit a deployment and change the image?
+
+The pod will terminate and another, new pod, will be created.
+
+Also, when looking at the replicasets, you'll see that the old replicaset doesn't have any pods and a new replicaset is created.
+
+
+
+How to delete a deployment?
+
+One way is by specifying the deployment name: `kubectl delete deployment [deployment_name]`
+Another way is using the deployment configuration file: `kubectl delete -f deployment.yaml`
+
+
+
+What happens when you delete a deployment?
+
+The pod related to the deployment will terminate and the replicaset will be removed.
+
+
+
+How to make an app accessible on a private or external network?
+
+Using a Service.
+
+
+
+An internal load balancer in Kubernetes is called ____ and an external load balancer is called ____
+
+An internal load balancer in Kubernetes is called Service and an external load balancer is Ingress
+
+
+#### Kubernetes - Services
+
+
+What is a Service in Kubernetes?
+
+"An abstract way to expose an application running on a set of Pods as a network service." - read more [here](https://kubernetes.io/docs/concepts/services-networking/service)
+In simpler words, it allows you to add an internal or external connectivity to a certain application running in a container.
+
+
+
+True or False? The lifecycle of Pods and Services isn't connected so when a Pod dies, the Service still stays
+
+True
+
+
+
+What Service types are there?
+
+* ClusterIP
+* NodePort
+* LoadBalancer
+* ExternalName
+
+More on this topic [here](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)
+
+
+
+How are Service and Deployment connected?
+
+The truth is they aren't directly connected. A Service points to Pod(s) directly (via label selectors), without connecting to the Deployment in any way.
+
+
+
+What are important steps in defining/adding a Service?
+
+1. Making sure that the targetPort of the Service matches the containerPort of the Pod
+2. Making sure that selector matches at least one of the Pod's labels
+
+
+
+What is the default service type in Kubernetes and what is it used for?
+
+The default is ClusterIP and it's used for exposing a port internally. It's useful when you want to enable internal communication between Pods and prevent any external access.
+
+
+
+How to get information on a certain service?
+
+`kubectl describe service <service_name>`
+
+It's more common to use `kubectl describe svc ...`
+
+
+
+What does the following command do?
+
+```
+kubectl expose rs some-replicaset --name=replicaset-svc --target-port=2017 --type=NodePort
+```
+
+
+It exposes a ReplicaSet by creating a service called 'replicaset-svc'. The exposed port is 2017 (this is the port used by the application) and the service type is NodePort which means it will be reachable externally.
+
+
+
+True or False? The target port, in the case of running the following command, will be exposed only on one of the Kubernetes cluster nodes, but it will be routed to all the pods
+
+```
+kubectl expose rs some-replicaset --name=replicaset-svc --target-port=2017 --type=NodePort
+```
+
+
+False. It will be exposed on every node of the cluster and will be routed to one of the Pods (which belong to the ReplicaSet)
+
+
+
+How to verify that a certain service is configured to forward requests to a given pod?
+
+Run `kubectl describe service <service_name>` and see if the IPs under "Endpoints" match any IPs from the output of `kubectl get pod -o wide`
+
+
+
+Explain what will happen when running apply on the following block
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+ name: some-app
+spec:
+ type: NodePort
+ ports:
+ - port: 8080
+ nodePort: 2017
+ protocol: TCP
+ selector:
+ type: backend
+ service: some-app
+```
+
+
+It creates a new Service of the type "NodePort", which means it can be used for internal and external communication with the app.
+The port of the application is 8080 and requests will be forwarded to this port. The exposed port is 2017. As a note, specifying the nodePort explicitly is not a common practice.
+The protocol is TCP (instead of UDP), and since it's also the default, you don't have to specify it.
+The selector is used by the Service to know to which Pods to forward the requests. In this case, Pods with the labels "type: backend" and "service: some-app".
+
+
+
+How to turn the following service into an external one?
+
+```
+spec:
+ selector:
+ app: some-app
+ ports:
+ - protocol: TCP
+ port: 8081
+ targetPort: 8081
+```
+
+
+Adding `type: LoadBalancer` and `nodePort`
+
+```
+spec:
+ selector:
+ app: some-app
+ type: LoadBalancer
+ ports:
+ - protocol: TCP
+ port: 8081
+ targetPort: 8081
+ nodePort: 32412
+```
+
+
+
+What would you use to route traffic from outside the Kubernetes cluster to services within a cluster?
+
+Ingress
+
+
+
+True or False? When "NodePort" is used, "ClusterIP" will be created automatically?
+
+True
+
+
+
+When would you use the "LoadBalancer" type
+
+Mostly when you would like to combine it with cloud provider's load balancer
+
+
+
+How would you map a service to an external address?
+
+Using the 'ExternalName' directive.
+
+
+
+Describe in detail what happens when you create a service
+
+1. Kubectl sends a request to the API server to create a Service
+2. The controller detects there is a new Service
+3. The controller creates Endpoint objects with the same name as the Service
+4. The controller is using the Service selector to identify the endpoints
+5. kube-proxy detects there is a new endpoint object + new service and adds iptables rules to capture traffic to the Service port and redirect it to endpoints
+6. kube-dns detects there is a new Service and adds a DNS record for it to the DNS server
+
+
+
+How to list the endpoints of a certain app?
+
+`kubectl get ep <app_name>`
+
+
+
+How can you find out information on a Service related to a certain Pod if all you can use is kubectl exec --
+
+You can run `kubectl exec <pod_name> -- env`, which will give you a couple of environment variables related to the Service.
+Variables such as `[SERVICE_NAME]_SERVICE_HOST`, `[SERVICE_NAME]_SERVICE_PORT`, ...
+
+
+
+Describe what happens when a container tries to connect with its corresponding Service for the first time. Explain who added each of the components you include in your description
+
+ - The container looks at the nameserver defined in /etc/resolv.conf
+ - The container queries the nameserver so the address is resolved to the Service IP
+ - Requests sent to the Service IP are forwarded with iptables rules (or other chosen software) to the endpoint(s).
+
+Explanation as to who added them:
+
+ - The nameserver in the container is added by kubelet during the scheduling of the Pod, by using kube-dns
+ - The DNS record of the service is added by kube-dns during the Service creation
+ - iptables rules are added by kube-proxy during Endpoint and Service creation
+
+
+#### Kubernetes - Ingress
+
+
+What is Ingress?
+
+From Kubernetes docs: "Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource."
+
+Read more [here](https://kubernetes.io/docs/concepts/services-networking/ingress/)
+
+
+
+Complete the following configuration file to make it Ingress
+
+```
+metadata:
+ name: someapp-ingress
+spec:
+```
+
+There are several ways to answer this question.
+
+```
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: someapp-ingress
+spec:
+  rules:
+  - host: my.host
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: someapp-internal-service
+            port:
+              number: 8080
+```
+
+
+
+
+Explain the meaning of "http", "host" and "backend" directives
+
+```
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: someapp-ingress
+spec:
+  rules:
+  - host: my.host
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: someapp-internal-service
+            port:
+              number: 8080
+```
+
+
+host is the entry point of the cluster, so basically a valid domain address that maps to a cluster node's IP address
+the http line is used for specifying that incoming requests will be forwarded to the internal service using http
+backend references the internal service (service.name matches the name under the Service's metadata and service.port matches a port from the Service's ports section)
+
+
+
+Why may using a wildcard in an Ingress host lead to issues?
+
+The reason you should not use a wildcard value in a host (like `- host: *`) is that you basically tell your Kubernetes cluster to forward all the traffic to the container where you used this Ingress. This may cause the entire cluster to go down.
+
+
+
+What is an Ingress Controller?
+
+An implementation of Ingress. It's basically another pod (or set of pods) that evaluates and processes Ingress rules and thus manages all the redirections.
+
+There are multiple Ingress Controller implementations (the one maintained by the Kubernetes project is the Kubernetes Nginx Ingress Controller).
+
+
+
+What are some use cases for using Ingress?
+
+* Multiple sub-domains (multiple host entries, each with its own service)
+* One domain with multiple services (multiple paths where each one is mapped to a different service/application)
+
+
+
+How to list Ingress in your namespace?
+
+kubectl get ingress
+
+
+
+What is the Ingress Default Backend?
+
+It specifies what to do with an incoming request to the Kubernetes cluster that isn't mapped to any backend (= there is no rule mapping the request to a service). If the default backend service isn't defined, it's recommended to define one so users still see some kind of message instead of nothing or an unclear error.
+
+
+
+How to configure a default backend?
+
+Create a Service resource that specifies the name of the default backend as reflected in `kubectl describe ingress ...` and the port under the ports section.
+
+
+
+How to configure TLS with Ingress?
+
+Add tls and secretName entries.
+
+```
+spec:
+ tls:
+ - hosts:
+ - some_app.com
+ secretName: someapp-secret-tls
+```
+
+
+
+True or False? When configuring Ingress with TLS, the Secret component must be in the same namespace as the Ingress component
+
+True
+
+
+
+Which Kubernetes concept would you use to control traffic flow at the IP address or port level?
+
+Network Policies
+
+
+
+What does the following block of lines do?
+
+```
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ type: backend
+ template:
+ metadata:
+ labels:
+ type: backend
+ spec:
+ containers:
+ - name: httpd-yup
+ image: httpd
+```
+
+
+It defines a ReplicaSet with 2 replicas for Pods whose `type` label is set to "backend", so at any given point in time there will be 2 such Pods running.
+
+
+#### Kubernetes - ReplicaSets
+
+
+What is the purpose of ReplicaSet?
+
+[kubernetes.io](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset): "A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods."
+
+In simpler words, a ReplicaSet will ensure the specified number of Pod replicas is running for a selected Pod. If there are more Pods than defined in the ReplicaSet, some will be removed. If there are fewer than what is defined in the ReplicaSet, more replicas will be added.
+
+
+
+What will happen when a Pod, created by a ReplicaSet, is deleted directly with kubectl delete po ...?
+
+The ReplicaSet will create a new Pod in order to reach the desired number of replicas.
+
+
+
+True or False? If a ReplicaSet defines 2 replicas but there are 3 Pods running matching the ReplicaSet selector, it will do nothing
+
+False. It will terminate one of the Pods to reach the desired state of 2 replicas.
+
+
+
+Describe the sequence of events in case of creating a ReplicaSet
+
+* The client (e.g. kubectl) sends a request to the API server to create a ReplicaSet
+* The Controller detects there is a new event requesting for a ReplicaSet
+* The controller creates new Pod definitions (the exact number depends on what is defined in the ReplicaSet definition)
+* The scheduler detects unassigned Pods and decides to which nodes to assign the Pods. This information is sent to the API server
+* Kubelet detects that two Pods were assigned to the node it's running on (as it is constantly watching the API server)
+* Kubelet sends requests to the container engine, to create the containers that are part of the Pod
+* Kubelet sends a request to the API server to notify it the Pods were created
+
+
+
+How to list ReplicaSets in the current namespace?
+
+kubectl get rs
+
+
+
+Is it possible to delete ReplicaSet without deleting the Pods it created?
+
+Yes, with `--cascade=false`.
+
+`kubectl delete -f rs.yaml --cascade=false`
+
+
+
+What is the default number of replicas if not explicitly specified?
+
+1
+
+
+
+What does the following output of kubectl get rs mean?
+
+NAME    DESIRED   CURRENT   READY   AGE
+web     2         2         0       2m23s
+
+
+The replicaset `web` has 2 replicas. It seems that the containers inside the Pod(s) are not yet running, since the value of READY is 0. This might be normal (it takes time for some containers to start running), or it might indicate an error. Running `kubectl describe po POD_NAME` or `kubectl logs POD_NAME` can give us more information.
+
+
+
+You run kubectl get rs and while DESIRED is set to 2, you see that READY is set to 0. What are some possible reasons for it to be 0?
+
+ * Images are still being pulled
+ * There is an error and the containers can't reach the state "Running"
+
+
+
+True or False? Pods specified by the selector field of a ReplicaSet must be created by the ReplicaSet itself
+
+False. The Pods can already be running, and they can initially be created by any object. It doesn't matter to the ReplicaSet, and it's not a requirement for it to acquire and monitor them.
+
+
+
+True or False? In case of a ReplicaSet, if the Pods specified in the selector field don't exist, the ReplicaSet will wait for them to run before doing anything
+
+False. It will take care of running the missing Pods.
+
+
+
+In case of a ReplicaSet, Which field is mandatory in the spec section?
+
+The field `template` in the spec section is mandatory. It's used by the ReplicaSet to create new Pods when needed. A sketch appears below.
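+
+A minimal ReplicaSet sketch showing the mandatory pieces (all names here are made up):
+
+```
+apiVersion: apps/v1
+kind: ReplicaSet
+metadata:
+  name: web
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      type: backend
+  template:
+    metadata:
+      labels:
+        type: backend
+    spec:
+      containers:
+      - name: httpd
+        image: httpd
+```
+
+Note that the labels in `template` must match the `selector`, otherwise the API server will reject the ReplicaSet.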
+
+
+
+You've created a ReplicaSet. How do you check whether the ReplicaSet found matching Pods or created new ones?
+
+`kubectl describe rs <rs_name>`
+
+It will be visible under `Events` (the very last lines)
+
+
+
+True or False? Deleting a ReplicaSet will delete the Pods it created
+
+True (and not only the Pods but anything else it created).
+
+
+
+True or False? Removing the label a ReplicaSet uses to match Pods from one of the Pods will cause the ReplicaSet to create a new Pod
+
+True. When the label used by a ReplicaSet in its selector field is removed from a Pod, that Pod is no longer controlled by the ReplicaSet, and the ReplicaSet will create a new Pod to compensate for the one it "lost".
+
+
+
+How to scale a deployment to 8 replicas?
+
+`kubectl scale deploy <deployment_name> --replicas=8`
+
+
+
+True or False? ReplicaSets are running the moment the user executed the command to create them (like kubectl create -f rs.yaml)
+
+False. It can take some time, depending on what exactly you are running. To see if they are up and running, run `kubectl get rs` and watch the 'READY' column.
+
+
+
+How to expose a ReplicaSet as a new service?
+
+`kubectl expose rs <replicaset_name> --name=<service_name> --target-port=<port> --type=NodePort`
+
+A few notes:
+ - the target port depends on which port the app is using in the container
+ - the type can be different and doesn't have to be specifically "NodePort"
+
+
+#### Kubernetes - Storage
+
+
+What is a volume in regards to Kubernetes?
+
+A directory accessible by the containers inside a certain Pod. The mechanism responsible for creating and managing the directory mainly depends on the volume type.
+
+
+
+Which problems do volumes in Kubernetes solve?
+
+1. Sharing files between containers running in the same Pod
+2. Storage in containers is ephemeral - it usually doesn't last for long. For example, when a container crashes, you lose all on-disk data.
+
+
+
+Explain ephemeral volume types vs. persistent volumes in regards to Pods
+
+Ephemeral volume types have the lifetime of a pod as opposed to persistent volumes which exist beyond the lifetime of a Pod.
+
+
+#### Kubernetes - Network Policies
+
+
+Explain Network Policies
+
+[kubernetes.io](https://kubernetes.io/docs/concepts/services-networking/network-policies): "NetworkPolicies are an application-centric construct which allow you to specify how a pod is allowed to communicate with various network "entities"..."
+
+In simpler words, Network Policies specify how pods are allowed/disallowed to communicate with each other and/or other network endpoints.
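+
+A minimal sketch (names are made up) of a policy that denies all ingress traffic to pods labeled `app: db`:
+
+```
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: deny-db-ingress
+spec:
+  podSelector:
+    matchLabels:
+      app: db
+  policyTypes:
+  - Ingress
+```
+
+Since the policy lists `Ingress` in policyTypes but defines no ingress rules, no inbound traffic is allowed to the selected pods.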
+
+
+
+What are some use cases for using Network Policies?
+
+ - Security: you want to prevent everyone from communicating with a certain pod for security reasons
+ - Controlling network traffic: you would like to deny network flow between two specific nodes
+
+
+
+True or False? If no network policies are applied to a pod, then no connections to or from it are allowed
+
+False. By default pods are non-isolated.
+
+
+
+In case of two pods, if there is an egress policy on the source denying traffic and an ingress policy on the destination that allows traffic, will traffic be allowed or denied?
+
+Denied. Both the source and destination policies have to allow traffic for it to be allowed.
+
+
+#### Kubernetes - Configuration File
+
+
+Which parts does a configuration file have?
+
+It has three main parts:
+1. Metadata
+2. Specification
+3. Status (this is automatically generated and added by Kubernetes)
+
+
+
+What is the format of a configuration file?
+
+YAML
+
+
+
+How to get latest configuration of a deployment?
+
+`kubectl get deployment [deployment_name] -o yaml`
+
+
+
+Where does a Kubernetes cluster store the cluster state?
+
+etcd
+
+
+#### Kubernetes - etcd
+
+
+What is etcd?
+
+
+
+True or False? Etcd holds the current status of any kubernetes component
+
+True
+
+
+
+True or False? The API server is the only component which communicates directly with etcd
+
+True
+
+
+
+True or False? application data is not stored in etcd
+
+True
+
+
+
+Why etcd? Why not some SQL or NoSQL database?
+
+
+#### Kubernetes - Namespaces
+
+
+What are namespaces?
+
+Namespaces allow you to split your cluster into virtual clusters where you can group your applications in a way that makes sense and is completely separated from the other groups (so, for example, you can create an app with the same name in two different namespaces)
+
+
+
+Why to use namespaces? What is the problem with using one default namespace?
+
+When using the default namespace alone, it becomes hard over time to get an overview of all the applications you manage in your cluster. Namespaces make it easier to organize the applications into groups that makes sense, like a namespace of all the monitoring applications and a namespace for all the security applications, etc.
+
+Namespaces can also be useful for managing Blue/Green environments where each namespace can include a different version of an app and also share resources that are in other namespaces (namespaces like logging, monitoring, etc.).
+
+Another use case for namespaces is one cluster used by multiple teams. When multiple teams use the same cluster, they might end up stepping on each other's toes. For example, if they create an app with the same name, one of the teams overrides the app of the other team, because there can't be two apps in Kubernetes with the same name (in the same namespace).
+
+
+
+True or False? When a namespace is deleted all resources in that namespace are not deleted but moved to another default namespace
+
+False. When a namespace is deleted, the resources in that namespace are deleted as well.
+
+
+
+What special namespaces are there by default when creating a Kubernetes cluster?
+
+* default
+* kube-system
+* kube-public
+* kube-node-lease
+
+
+
+What can you find in kube-system namespace?
+
+* Control plane (master) processes
+* System processes
+
+
+
+How to list all namespaces?
+
+`kubectl get namespaces`
+
+
+
+What does kube-public contain?
+
+* A configmap, which contains cluster information
+* Publicly accessible data
+
+
+
+How to get the name of the current namespace?
+
+`kubectl config view | grep namespace`
+
+
+
+What does kube-node-lease contain?
+
+It holds information on heartbeats of nodes. Each node gets an object which holds information about its availability.
+
+
+
+How to create a namespace?
+
+One way is by running `kubectl create namespace [NAMESPACE_NAME]`
+
+Another way is by using a namespace configuration file:
+```
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: some-namespace
+```
+
+
+
+What does the default namespace contain?
+
+Any resource you create without specifying a namespace.
+
+
+
+True or False? With namespaces you can limit the resources consumed by the users/teams
+
+True. With namespaces you can limit CPU, RAM and storage usage.
+
+
+
+How to switch to another namespace? In other words how to change active namespace?
+
+`kubectl config set-context --current --namespace=some-namespace` and validate with `kubectl config view --minify | grep namespace:`
+
+OR
+
+`kubens some-namespace`
+
+
+
+What is Resource Quota?
+
+A ResourceQuota object sets constraints on the total resource consumption (CPU, memory, number of objects, ...) allowed in a namespace.
+
+
+How to create a Resource Quota?
+
+`kubectl create quota some-quota --hard=cpu=2,pods=2`
+
+
+
+Which resources are accessible from different namespaces?
+
+Service.
+
+
+
+Let's say you have three namespaces: x, y and z. In the x namespace you have a ConfigMap referencing a service in the z namespace. Can you reference the ConfigMap in the x namespace from the y namespace?
+
+No, you would have to create a separate ConfigMap in the y namespace.
+
+
+
+Which service and in which namespace the following file is referencing?
+
+```
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: some-configmap
+data:
+ some_url: samurai.jack
+```
+
+
+It's referencing the service "samurai" in the namespace called "jack".
+
+
+
+Which components can't be created within a namespace?
+
+Volume and Node.
+
+
+
+How to list all the components that are bound to a namespace?
+
+`kubectl api-resources --namespaced=true`
+
+
+
+How to create components in a namespace?
+
+One way is by specifying --namespace like this: `kubectl apply -f my_component.yaml --namespace=some-namespace`
+Another way is by specifying it in the YAML itself:
+
+```
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: some-configmap
+ namespace: some-namespace
+```
+
+and you can verify with: `kubectl get configmap -n some-namespace`
+
+
+
+How to execute the command "ls" in an existing pod?
+
+kubectl exec some-pod -it -- ls
+
+
+
+How to create a service that exposes a deployment?
+
+kubectl expose deploy some-deployment --port=80 --target-port=8080
+
+
+
+How to create a pod and a service with one command?
+
+kubectl run nginx --image=nginx --restart=Never --port 80 --expose
+
+
+
+Describe in detail what the following command does kubectl create deployment kubernetes-httpd --image=httpd
+
+
+
+Why create a Deployment if Pods can be launched with a ReplicaSet?
+
+A Deployment manages ReplicaSets for you and adds capabilities on top of them, such as rolling updates and rollbacks of the application version.
+
+
+How to get list of resources which are not in a namespace?
+
+kubectl api-resources --namespaced=false
+
+
+
+How to delete all pods whose status is not "Running"?
+
+kubectl delete pods --field-selector=status.phase!='Running'
+
+
+
+What does the kubectl logs [pod-name] command do?
+
+It prints the logs of the container(s) running in the given pod.
+
+
+What does the kubectl describe pod [pod name] command do?
+
+It shows detailed information about the given pod: its state, containers, images, volumes, recent events, etc.
+
+
+How to display the resources usages of pods?
+
+kubectl top pod
+
+
+
+What does kubectl get componentstatus do?
+
+Outputs the status of each of the control plane components.
+
+
+
+What is Minikube?
+
+Minikube is a lightweight Kubernetes implementation. It creates a local virtual machine and deploys a simple (single node) cluster.
+
+
+
+How do you monitor your Kubernetes cluster?
+
+
+
+You suspect one of the pods is having issues, what do you do?
+
+Start by inspecting the pods' status. We can use the command `kubectl get pods` (--all-namespaces for pods in system namespaces)
+
+If we see "Error" status, we can keep debugging by running the command `kubectl describe pod [name]`. In case we still don't see anything useful we can try stern for log tailing.
+
+In case we find out there was a temporary issue with the pod or the system, we can try restarting the pod with the following `kubectl scale deployment [name] --replicas=0`
+
+Setting the replicas to 0 will shut down the process. Now start it with `kubectl scale deployment [name] --replicas=1`
+
+
+
+What does the Kubernetes Scheduler do?
+
+
+
+What happens to running pods if you stop Kubelet on the worker nodes?
+
+
+
+What happens when pods are using too much memory? (more than their limit)
+
+They become candidates for termination.
+
+
+
+Describe how roll-back works
+
+
+
+True or False? Memory is a compressible resource, meaning that when a container reaches the memory limit, it will keep running
+
+False. CPU is a compressible resource while memory is a non-compressible resource - once a container reaches its memory limit, it will be terminated.
+
+
+
+What is the control loop? How does it work?
+
+Explained [here](https://www.youtube.com/watch?v=i9V4oCa5f9I)
+
+
+#### Kubernetes - Operators
+
+
+What is an Operator?
+
+Explained [here](https://kubernetes.io/docs/concepts/extend-kubernetes/operator)
+
+"Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop."
+
+
+
+Why do we need Operators?
+
+The process of managing stateful applications in Kubernetes isn't as straightforward as managing stateless applications, where reaching the desired state and upgrades are both handled the same way for every replica. In stateful applications, upgrading each replica might require different handling due to the stateful nature of the app; each replica might be in a different state. As a result, we often need a human operator to manage stateful applications. A Kubernetes Operator is supposed to assist with this.
+
+This also helps with automating a standard process on multiple Kubernetes clusters
+
+
+
+What components does the Operator consist of?
+
+1. CRD (custom resource definition)
+2. Controller - Custom control loop which runs against the CRD
+
+
+
+How does an Operator work?
+
+It uses the control loop mechanism used by Kubernetes in general. It watches for changes in the application state. The difference is that it uses a custom control loop.
+
+In addition, it also makes use of CRDs (Custom Resource Definitions), so basically it extends the Kubernetes API.
+
+
+
+True or False? Kubernetes Operators are used for stateful applications
+
+True
+
+
+
+What is the Operator Framework?
+
+An open source toolkit used to manage Kubernetes native applications, called operators, in an automated and efficient way.
+
+
+
+What components does the Operator Framework consist of?
+
+1. Operator SDK - allows developers to build operators
+2. Operator Lifecycle Manager - helps to install, update and generally manage the lifecycle of all operators
+3. Operator Metering - Enables usage reporting for operators that provide specialized services
+
+
+
+Describe in detail what is the Operator Lifecycle Manager
+
+It's part of the Operator Framework, used for managing the lifecycle of operators. It basically extends Kubernetes so a user can use a declarative way to manage operators (installation, upgrade, ...).
+
+
+
+What does the openshift-operator-lifecycle-manager namespace include?
+
+It includes:
+
+ * catalog-operator - resolves and installs ClusterServiceVersions and the resources they specify
+ * olm-operator - deploys applications defined by a ClusterServiceVersion resource
+
+
+
+What is kubeconfig? What do you use it for?
+
+
+
+Can you use a Deployment for stateful applications?
+
+
+
+Explain StatefulSet
+
+
+#### Kubernetes - Secrets
+
+
+Explain Kubernetes Secrets
+
+Secrets let you store and manage sensitive information (passwords, ssh keys, etc.)
+
+
+
+How to create a Secret from a key and value?
+
+kubectl create secret generic some-secret --from-literal=password='donttellmypassword'
+
+
+
+How to create a Secret from a file?
+
+kubectl create secret generic some-secret --from-file=/some/file.txt
+
+
+
+What does type: Opaque in a secret file mean? What other types are there?
+
+Opaque is the default type used for key-value pairs.
+
+
+
+True or False? storing data in a Secret component makes it automatically secured
+
+False. Some known security mechanisms like "encryption" aren't enabled by default.
+
+
+
+What is the problem with the following Secret file:
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+ name: some-secret
+type: Opaque
+data:
+ password: mySecretPassword
+```
+
+The password value isn't base64-encoded. Values under the `data` field must be base64-encoded.
+You should run something like `echo -n 'mySecretPassword' | base64` and paste the result into the file instead of the plain-text value (or use the `stringData` field, which accepts plain text). Keep in mind that base64 is an encoding, not encryption.
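+
+A corrected sketch of the same Secret (the value below is `mySecretPassword` after base64 encoding):
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+  name: some-secret
+type: Opaque
+data:
+  password: bXlTZWNyZXRQYXNzd29yZA==
+```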
+
+
+
+How to create a Secret from a configuration file?
+
+`kubectl apply -f some-secret.yaml`
+
+
+
+What does the following in a Deployment configuration file mean?
+
+```
+spec:
+  containers:
+  - name: some-app
+    env:
+    - name: USER_PASSWORD
+      valueFrom:
+        secretKeyRef:
+          name: some-secret
+          key: password
+```
+
+The USER_PASSWORD environment variable will store the value of the password key from the secret called "some-secret".
+In other words, you reference a value from a Kubernetes Secret.
+
+
+#### Kubernetes - Volumes
+
+
+True or False? Kubernetes provides data persistence out of the box, so when you restart a pod, data is saved
+
+False
+
+
+
+Explain "Persistent Volumes". Why do we need it?
+
+Persistent Volumes allow us to save data so basically they provide storage that doesn't depend on the pod lifecycle.
+
+
+
+True or False? Persistent Volume must be available to all nodes because the pod can restart on any of them
+
+True
+
+
+
+What types of persistent volumes are there?
+
+* NFS
+* iSCSI
+* CephFS
+* ...
+
+
+
+What is PersistentVolumeClaim?
+
+
+
+Explain Volume Snapshots
+
+
+
+True or False? Kubernetes manages data persistence
+
+False
+
+
+
+Explain Storage Classes
+
+
+
+Explain "Dynamic Provisioning" and "Static Provisioning"
+
+
+
+Explain Access Modes
+
+
+
+What is CSI Volume Cloning?
+
+
+
+Explain "Ephemeral Volumes"
+
+
+
+What types of ephemeral volumes does Kubernetes support?
+
+
+
+What is Reclaim Policy?
+
+
+
+What reclaim policies are there?
+
+* Retain
+* Recycle
+* Delete
+
+
+#### Kubernetes - Access Control
+
+
+What is RBAC?
+
+
+
+Explain the Role and RoleBinding objects
+
+
+
+What is the difference between Role and ClusterRole objects?
+
+
+
+Explain what are "Service Accounts" and in which scenario would use create/use one
+
+[Kubernetes.io](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account): "A service account provides an identity for processes that run in a Pod."
+
+An example of when to use one:
+You define a pipeline that needs to build and push an image. In order to have sufficient permissions to build and push an image, that pipeline would require a service account with sufficient permissions.
+
+
+
+What happens when you create a pod and you DON'T specify a service account?
+
+The pod is automatically assigned the default service account (of the namespace where the pod is running).
+
+
+
+Explain how Service Accounts are different from User Accounts
+
+ - User accounts are global while Service accounts are unique per namespace
+ - User accounts are meant for humans or client processes while Service accounts are for processes which run in pods
+
+
+
+How to list Service Accounts?
+
+`kubectl get serviceaccounts`
+
+
+
+Explain "Security Context"
+
+[kubernetes.io](https://kubernetes.io/docs/tasks/configure-pod-container/security-context): "A security context defines privilege and access control settings for a Pod or Container."
+
+
+#### Kubernetes - Patterns
+
+
+Explain the sidecar container pattern
+
+
+#### Kubernetes - CronJob
+
+
+Explain what is CronJob and what is it used for
+
+
+
+What possible issue can arise from using the following spec and how to fix it?
+
+```
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+ name: some-cron-job
+spec:
+ schedule: '*/1 * * * *'
+ startingDeadlineSeconds: 10
+ concurrencyPolicy: Allow
+```
+
+
+If the cron job fails, the next job will not replace the previous one due to the "concurrencyPolicy" value which is "Allow". It will keep spawning new jobs and so eventually the system will be filled with failed cron jobs.
+To avoid such problem, the "concurrencyPolicy" value should be either "Replace" or "Forbid".
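+
+A corrected sketch of the same (partial) spec:
+
+```
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name: some-cron-job
+spec:
+  schedule: '*/1 * * * *'
+  startingDeadlineSeconds: 10
+  concurrencyPolicy: Forbid
+```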
+
+
+
+What issue might arise from using the following CronJob and how to fix it?
+
+```
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+ name: "some-cron-job"
+spec:
+ schedule: '*/1 * * * *'
+jobTemplate:
+ spec:
+ template:
+ spec:
+ restartPolicy: Never
+ concurrencyPolicy: Forbid
+ successfulJobsHistoryLimit: 1
+ failedJobsHistoryLimit: 1
+```
+
+
+The following lines are placed under the job's template instead of under the CronJob spec:
+
+```
+concurrencyPolicy: Forbid
+successfulJobsHistoryLimit: 1
+failedJobsHistoryLimit: 1
+```
+
+As a result, this configuration isn't part of the cron job spec, hence the cron job has no such limits, which can cause issues like OOM and potentially lead to the API server being down.
+To fix it, these lines should be placed in the spec of the cron job, above or below the "schedule" directive in the above example.
+
+
+#### Kubernetes - Misc
+
+
+Explain Imperative Management vs. Declarative Management
+
+
+
+Explain what Kubernetes Service Discovery means
+
+
+
+You have one Kubernetes cluster and multiple teams that would like to use it. You would like to limit the resources each team consumes in the cluster. Which Kubernetes concept would you use for that?
+
+Namespaces will allow you to limit resources (e.g. with ResourceQuota) and also make sure there are no collisions between teams when working in the cluster (like creating an app with the same name).
+
+
+
+What does Kube Proxy do?
+
+
+
+What "Resources Quotas" are used for and how?
+
+
+
+Explain ConfigMap
+
+It separates configuration from the pods.
+It's good for cases where you might need to change configuration at some point but you don't want to restart the application or rebuild the image, so you create a ConfigMap and connect it to a pod, keeping the configuration external to the pod.
+
+Overall it's good for:
+* Sharing the same configuration between different pods
+* Storing configuration externally to the pod
+
+
+
+How to use ConfigMaps?
+
+1. Create one (from key/value literals, a file or an env file)
+2. Attach it to a pod, e.g. mount it as a volume or expose its keys as environment variables (see the sketch below)
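+
+A minimal sketch of both steps (all names are made up):
+
+```
+kubectl create configmap app-config --from-literal=APP_COLOR=blue
+```
+
+and then, in the pod spec, expose its keys as environment variables:
+
+```
+spec:
+  containers:
+  - name: some-app
+    image: nginx
+    envFrom:
+    - configMapRef:
+        name: app-config
+```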
+
+
+
+True or False? Sensitive data, like credentials, should be stored in a ConfigMap
+
+False. Use secret.
+
+
+
+Explain "Horizontal Pod Autoscaler"
+
+Scales the number of pods automatically based on observed CPU utilization.
+
+
+
+When you delete a pod, is it deleted instantly? (a moment after running the command)
+
+
+
+What does being cloud-native mean?
+
+
+
+Explain the pet and cattle approach of infrastructure with respect to kubernetes
+
+
+
+Describe how one proceeds to run a containerised web app in K8s, which should be reachable from a public URL.
+
+
+
+How would you troubleshoot your cluster if some applications are not reachable any more?
+
+
+
+Describe what CustomResourceDefinitions are in the Kubernetes world. What can they be used for?
+
+
+
+ How does scheduling work in kubernetes?
+
+The control plane component kube-scheduler asks the following questions:
+1. What to schedule? It tries to understand the pod-definition specifications
+2. Which node to schedule on? It tries to determine the best node with available resources to run the pod
+3. It then binds the Pod to the chosen node
+
+View more [here](https://www.youtube.com/watch?v=rDCWxkvPlAw)
+
+
+
+ How are labels and selectors used?
+
+
+
+What QoS classes are there?
+
+* Guaranteed
+* Burstable
+* BestEffort
+
+
+
+Explain Labels. What are they and why would one use them?
+
+
+
+Explain Selectors
+
+
+
+What is Kubeconfig?
+
+
+#### Kubernetes - Gatekeeper
+
+
+What is Gatekeeper?
+
+[Gatekeeper docs](https://open-policy-agent.github.io/gatekeeper/website/docs): "Gatekeeper is a validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent"
+
+
+
+Explain how Gatekeeper works
+
+On every request sent to the Kubernetes cluster, Gatekeeper sends the policies and the resources to OPA (Open Policy Agent) to check whether the request violates any policy. If it does, Gatekeeper returns the policy error message. If it doesn't violate any policy, the request reaches the cluster.
+
+
+#### Kubernetes - Policy Testing
+
+
+What is Conftest?
+
+Conftest allows you to write tests against structured files. You can think of it as a testing library for Kubernetes resources.
+It is mostly used in testing environments such as CI pipelines or local hooks.
+
+
+
+What is Datree? How is it different from Conftest?
+
+Same as Conftest, it is used for policy testing and enforcement. The difference is that it comes with built-in policies.
+
+
+#### Kubernetes - Helm
+
+
+What is Helm?
+
+Package manager for Kubernetes. Basically the ability to package YAML files and distribute them to other users and apply them in different clusters.
+
+
+
+Why do we need Helm? What would be the use case for using it?
+
+Sometimes when you would like to deploy a certain application to your cluster, you need to create multiple YAML files/components like Secret, Service, ConfigMap, etc. This can be a tedious task. So it makes sense to ease the process by introducing something that allows us to share this bundle of YAML files every time we would like to add an application to our cluster. This something is called Helm.
+
+A common scenario is having multiple Kubernetes clusters (prod, dev, staging). Instead of individually applying different YAMLs in each cluster, it makes more sense to create one Chart and install it in every cluster.
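+
+A typical flow, as a sketch (the repository and release names here are just examples):
+
+```
+helm repo add bitnami https://charts.bitnami.com/bitnami
+helm install my-release bitnami/nginx
+```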
+
+
+
+Explain "Helm Charts"
+
+A Helm Chart is a bundle of YAML files. A bundle that you can consume from repositories or create on your own and publish to the repositories.
+
+
+
+It is said that Helm is also a Templating Engine. What does it mean?
+
+It is useful for scenarios where you have multiple applications that are all similar, so there are only minor differences in their configuration files and most values are the same. With Helm you can define a common blueprint for all of them, and the values that are not fixed can become placeholders. This is called a template file and it looks similar to the following:
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: {{ .Values.name }}
+spec:
+ containers:
+ - name: {{ .Values.container.name }}
+ image: {{ .Values.container.image }}
+ port: {{ .Values.container.port }}
+```
+
+The values themselves will be in a separate file (values.yaml):
+
+```
+name: some-app
+container:
+ name: some-app-container
+ image: some-app-image
+ port: 1991
+```
+
+
+
+What are some use cases for using Helm template file?
+
+* Deploy the same application across multiple different environments
+* CI/CD
+
+
+
+Explain the Helm Chart Directory Structure
+
+someChart/ -> the name of the chart
+  Chart.yaml -> meta information on the chart
+  values.yaml -> values for template files
+  charts/ -> chart dependencies
+  templates/ -> template files :)
+
+
+
+How do you search for charts?
+
+`helm search hub [some_keyword]`
+
+
+
+Is it possible to override values in the values.yaml file when installing a chart?
+
+Yes. You can pass another values file:
+`helm install --values=override-values.yaml [CHART_NAME]`
+
+Or directly on the command line: `helm install --set some_key=some_value`
+
+
+
+How does Helm support release management?
+
+Helm allows you to upgrade, remove, and roll back to previous versions of charts. In Helm 2 this was done with a server-side component known as "Tiller". In Helm 3, Tiller was removed due to security concerns.
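+
+A hedged sketch of the release-management commands (the release name and chart path are hypothetical):
+
+```
+helm history my-release                # list the revisions of a release
+helm upgrade my-release ./my-chart     # roll out a new revision
+helm rollback my-release 1             # roll back to revision 1
+```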
+
+
+#### Kubernetes - Security
+
+
+What security best practices do you follow in regards to the Kubernetes cluster?
+
+ * Secure inter-service communication (one way is to use Istio to provide mutual TLS)
+ * Isolate different resources into separate namespaces based on some logical groups
+ * Use a supported container runtime (if you rely on Docker via dockershim, note that it was deprecated; you might want to use CRI-O as the engine and Podman for the CLI)
+ * Properly test changes to the cluster (e.g. consider using Datree to prevent Kubernetes misconfigurations)
+ * Limit who can do what in the cluster (for example by using OPA Gatekeeper)
+ * Use NetworkPolicy to apply network security (see the sketch after this list)
+ * Consider using tools (e.g. Falco) for monitoring threats
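+
+As an illustration of the NetworkPolicy point, a minimal policy that denies all ingress traffic to Pods in a namespace (the namespace name is hypothetical):
+
+```
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+  namespace: some-namespace
+spec:
+  podSelector: {}      # empty selector matches every Pod in the namespace
+  policyTypes:
+    - Ingress          # no ingress rules defined, so all ingress is denied
+```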
+
+
+#### Kubernetes - Troubleshooting Scenarios
+
+
+Running `kubectl get pods` you see Pods in "Pending" status. What would you do?
+
+One possible path is to run `kubectl describe pod <POD_NAME>` to get more details.
+You might see one of the following:
+ * Cluster is full. In this case, extend the cluster.
+ * ResourceQuota limits are met. In this case you might want to modify them.
+ * The PersistentVolumeClaim mount is pending.
+
+If none of the above helped, run the command (`get pods`) with `-o wide` to see if the Pod is assigned to a node. If not, there might be an issue with the scheduler.
+
+
+
+Users are unable to reach an application running in a Pod on Kubernetes. What might be the issue and how would you check?
+
+One possible path is to start with checking the Pod status.
+1. Is the Pod pending? If yes, check for the reason with `kubectl describe pod <POD_NAME>`
+TODO: finish this...
+
+
+#### Kubernetes - Submariner
+
+
+Explain what Submariner is and what it is used for
+
+"Submariner enables direct networking between pods and services in different Kubernetes clusters, either on premise or in the cloud."
+
+You can learn more [here](https://submariner-io.github.io)
+
+
+
+What does each of the following components do?
+
+ * Lighthouse
+ * Broker
+ * Gateway Engine
+ * Route Agent
+
+
+#### Kubernetes - Istio
+
+
+What is Istio? What is it used for?
+
diff --git a/exercises/terraform/README.md b/exercises/terraform/README.md
new file mode 100644
index 0000000..84d3151
--- /dev/null
+++ b/exercises/terraform/README.md
@@ -0,0 +1,437 @@
+## Terraform
+
+
+Explain what Terraform is and how it works
+
+[Terraform.io](https://www.terraform.io/intro/index.html#what-is-terraform-): "Terraform is an infrastructure as code (IaC) tool that allows you to build, change, and version infrastructure safely and efficiently."
+
+
+
+Why would one prefer using Terraform over other technologies? (e.g. Ansible, Puppet, CloudFormation)
+
+A common *wrong* answer is to say that Ansible and Puppet are configuration management tools
+and Terraform is a provisioning tool. While technically true, it doesn't mean Ansible and Puppet can't
+be used for provisioning infrastructure. Also, it doesn't explain why Terraform should be used over
+CloudFormation if at all.
+
+The benefits of Terraform over the other tools:
+
+ * It follows the immutable infrastructure approach, which has benefits like avoiding configuration drift over time
+ * Ansible and Puppet are more procedural (you specify what to execute in each step) while Terraform is declarative: you describe the overall desired end state rather than individual resources or tasks. Consider going from 1 to 2 servers in each tool: in Terraform you simply specify 2, while in Ansible and Puppet you have to explicitly provision only the one additional server (see the sketch below).
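+
+A minimal sketch of the declarative style (the AMI ID is hypothetical):
+
+```
+resource "aws_instance" "web" {
+  count         = 2              # going from 1 to 2 servers: just change this number
+  ami           = "ami-12345678" # hypothetical AMI ID
+  instance_type = "t3.micro"
+}
+```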
+
+
+
+How do you structure your Terraform projects?
+
+```
+terraform_directory/
+  providers.tf   -> lists the providers (source, version, etc.)
+  variables.tf   -> any variable used in other files such as main.tf
+  main.tf        -> lists the resources
+```
+
+
+
+True or False? Terraform follows the mutable infrastructure paradigm
+
+False. Terraform follows the immutable infrastructure paradigm.
+
+
+
+True or False? Terraform uses declarative style to describe the expected end state
+
+True
+
+
+
+What is HCL?
+
+HCL stands for HashiCorp Configuration Language. It is the language HashiCorp created to serve as the configuration language for a number of its tools, including Terraform.
+
+
+
+Explain what is "Terraform configuration"
+
+A configuration is a root module along with a tree of child modules that are called as dependencies from the root module.
+
+
+
+Explain what the following commands do:
+
+ * terraform init
+ * terraform plan
+ * terraform validate
+ * terraform apply
+
+
+* `terraform init`: scans your code to figure out which providers you are using and downloads them.
+* `terraform plan`: lets you see what Terraform is about to do before actually doing it.
+* `terraform validate`: checks whether the configuration is syntactically valid and internally consistent within a directory.
+* `terraform apply`: provisions the resources specified in the .tf files.
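+
+A typical workflow chaining these commands (the plan file name is arbitrary):
+
+```
+terraform init                 # download providers and initialize the backend
+terraform validate             # catch syntax and consistency errors early
+terraform plan -out=tfplan     # preview and save the planned changes
+terraform apply tfplan         # apply exactly the saved plan
+```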
+
+
+#### Terraform - Resources
+
+
+What is a "resource"?
+
+HashiCorp: "Terraform uses resource blocks to manage infrastructure, such as virtual networks, compute instances, or higher-level components such as DNS records. Resource blocks represent one or more infrastructure objects in your Terraform configuration."
+
+
+
+Explain each part of the following line: `resource "aws_instance" "web_server" {...}`
+
+ - resource: keyword for defining a resource
+ - "aws_instance": the type of the resource
+ - "web_server": the name of the resource
+
+
+
+What is the ID of the following resource: `resource "aws_instance" "web_server" {...}`
+
+`aws_instance.web_server`
+
+
+
+True or False? Resource ID must be unique within a workspace
+
+True
+
+
+
+Explain each of the following in regards to resources
+
+ * Arguments
+ * Attributes
+ * Meta-arguments
+
+ - Arguments: resource-specific configuration values
+ - Attributes: values exposed by the resource in the form `resource_type.resource_name.attribute_name`. They are usually set by the provider or the underlying API.
+ - Meta-arguments: arguments provided by Terraform itself (e.g. `count`, `for_each`, `depends_on`) that change a resource's behaviour (see the sketch after this list)
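+
+A small sketch putting the three together (the AMI ID and names are hypothetical):
+
+```
+resource "aws_instance" "web_server" {
+  ami           = "ami-12345678"   # argument
+  instance_type = "t3.micro"       # argument
+  count         = 2                # meta-argument
+}
+
+output "first_server_ip" {
+  value = aws_instance.web_server[0].public_ip   # attribute set by the provider
+}
+```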
+
+
+#### Terraform - Providers
+
+
+Explain what is a "provider"
+
+[terraform.io](https://www.terraform.io/docs/language/providers/index.html): "Terraform relies on plugins called "providers" to interact with cloud providers, SaaS providers, and other APIs...Each provider adds a set of resource types and/or data sources that Terraform can manage. Every resource type is implemented by a provider; without providers, Terraform can't manage any kind of infrastructure."
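+
+A minimal sketch of declaring and configuring a provider (the version constraint and region are examples):
+
+```
+terraform {
+  required_providers {
+    aws = {
+      source  = "hashicorp/aws"
+      version = "~> 4.0"
+    }
+  }
+}
+
+provider "aws" {
+  region = "us-east-1"
+}
+```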
+
+
+
+What is the name of the provider in this case: `resource "libvirt_domain" "instance" {...}`
+
+libvirt
+
+
+#### Terraform - Variables
+
+
+What are Input Variables in Terraform? Why should one use them?
+
+Input variables serve as parameters to a Terraform module. They allow you, for example, to define a value once and reference it in different places in the module, so the next time you want to change the value, you change it in one place instead of many.
+
+
+
+How do you define variables?
+
+```
+variable "app_id" {
+  type        = string
+  description = "The ID of the application"
+  default     = "some_value"
+}
+```
+
+Usually they are defined in their own file (vars.tf for example).
+
+
+
+How are variables used in modules?
+
+They are referenced with `var.VARIABLE_NAME`
+
+vars.tf:
+
+```
+variable "memory" {
+ type = string
+ default "8192"
+}
+
+variable "cpu" {
+ type = string
+ default = "4"
+}
+```
+
+main.tf:
+
+```
+resource "libvirt_domain" "vm1" {
+ name = "vm1"
+ memory = var.memory
+ cpu = var.cpu
+}
+```
+
+
+
+How would you enforce that users of your variables provide values that meet certain constraints? For example, a number greater than 1
+
+Using a `validation` block:
+
+```
+variable "some_var" {
+ type = number
+
+ validation {
+ condition = var.some_var > 1
+ error_message = "you have to specify a number greater than 1"
+ }
+
+}
+```
+
+
+
+What is the effect of setting a variable as "sensitive"?
+
+It doesn't show the variable's value when you run `terraform apply` or `terraform plan`, but the value is eventually still recorded in the state file.
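+
+A minimal sketch (the variable name is hypothetical):
+
+```
+variable "db_password" {
+  type      = string
+  sensitive = true
+}
+```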
+
+
+
+True or False? If an expression's result depends on a sensitive variable, it will be treated as sensitive as well
+
+True
+
+
+
+The same variable is defined in the following places:
+
+ - The file `terraform.tfvars`
+ - Environment variable
+ - Using `-var` or `-var-file`
+
+According to variable precedence, which source will be used first?
+
+Terraform loads variables in the following order, with later sources taking precedence over earlier ones:
+
+ - Environment variables
+ - The file `terraform.tfvars`
+ - Using `-var` or `-var-file`
+
+
+
+What other way is there to define lots of variables in a more "simplified" way?
+
+Using a `.tfvars` file, which consists of simple variable name assignments, this way:
+
+```
+x = 2
+y = "mario"
+z = "luigi"
+```
+
+
+#### Terraform - State
+
+
+What is the `terraform.tfstate` file used for?
+
+It keeps track of the IDs of created resources so that Terraform knows what it's managing.
+
+
+
+How do you rename an existing resource?
+
+`terraform state mv`
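+
+A hedged usage sketch (the resource addresses are hypothetical):
+
+```
+# Move the state entry so Terraform doesn't plan a destroy/recreate
+terraform state mv aws_instance.old_name aws_instance.new_name
+```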
+
+
+
+Why does it matter where you store the tfstate file? Where would you store it?
+
+ - tfstate may contain credentials in plain text. You don't want to put it in a publicly shared location
+ - tfstate shouldn't be modified concurrently, so putting it in a shared location available to everyone with "write" permissions might lead to issues (Terraform remote state with locking doesn't have this problem)
+ - tfstate is an important file. As such, it might be better to put it in a location that has regular backups
+
+As such, tfstate shouldn't be stored in git repositories. Secured storage, such as a secured bucket, is a better option.
+
+
+
+Which command is responsible for creating the state file?
+
+ - `terraform apply`
+ - The above command will create the tfstate file in the working folder.
+
+
+
+Where does the state get stored by default?
+
+ - The state is stored by default in a local file named `terraform.tfstate`.
+
+
+
+Can we store the tfstate file in a remote location? If yes, under which conditions would you do this?
+
+ - Yes, it can also be stored remotely, which works better in a team environment, on the condition that the remote location is not publicly accessible, since the tfstate file can contain sensitive information. Access to this remote location must be shared only with team members.
+
+
+
+Mention some best practices related to tfstate
+
+ - Don't edit it manually. tfstate was designed to be manipulated by Terraform and not by users directly.
+ - Store it in a secured location (since it can include credentials and sensitive data in general)
+ - Back it up regularly so you can roll back easily when needed
+ - Store it in remote shared storage. This is especially needed when working in a team where the state can be updated by any of the team members
+ - Enable versioning if the storage where you keep the state file supports it. Versioning is great for backups and roll-backs in case of an issue.
+
+
+
+How and why should concurrent edits of the state file be avoided?
+
+If two users or processes concurrently edit the state file, it can result in an invalid state file that doesn't actually represent the state of the resources.
+
+To avoid that, Terraform can apply state locking if the backend supports it. For example, the AWS S3 backend supports state locking and consistency via DynamoDB. If the backend supports it, Terraform makes use of state locking automatically, so nothing is required from the user to activate it.
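+
+A hedged sketch of a locked remote backend (the bucket, key, and table names are hypothetical):
+
+```
+terraform {
+  backend "s3" {
+    bucket         = "my-terraform-state"      # S3 bucket holding the state file
+    key            = "prod/terraform.tfstate"  # path of the state file in the bucket
+    region         = "us-east-1"
+    dynamodb_table = "terraform-locks"         # DynamoDB table used for state locking
+    encrypt        = true
+  }
+}
+```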
+
+
+
+Describe how you manage state file(s) when you have multiple environments (e.g. development, staging and production)
+
+There is no right or wrong here, but it seems that the overall preferred way is to have a dedicated state file per environment.
+
+
+
+How do you write a variable whose value changes via an external source or during `terraform apply`?
+
+You declare it without a default value: `variable "my_var" {}`. Terraform then requires the value to be supplied externally (e.g. via `-var` or an environment variable) or prompts for it during `terraform apply`.
+
+
+
+You've deployed a virtual machine with Terraform and you would like to pass data to it (or execute some commands). Which concept of Terraform would you use?
+
+[Provisioners](https://www.terraform.io/docs/language/resources/provisioners)
+
+
+#### Terraform - Provisioners
+
+
+What are "Provisioners"? What they are used for?
+
+Provisioners used to execute actions on local or remote machine. It's extremely useful in case you provisioned an instance and you want to make a couple of changes in the machine you've created without manually ssh into it after Terraform finished to run and manually run them.
+
+
+
+What are `local-exec` and `remote-exec` in the context of provisioners?
+
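+A hedged sketch showing both, assuming a hypothetical AWS instance and SSH key path:
+
+```
+resource "aws_instance" "web" {
+  ami           = "ami-12345678"   # hypothetical AMI ID
+  instance_type = "t3.micro"
+
+  # local-exec runs a command on the machine running Terraform
+  provisioner "local-exec" {
+    command = "echo ${self.private_ip} >> private_ips.txt"
+  }
+
+  # remote-exec runs commands on the newly created resource over SSH
+  provisioner "remote-exec" {
+    inline = ["sudo apt-get update"]
+  }
+
+  connection {
+    type        = "ssh"
+    user        = "ubuntu"
+    private_key = file("~/.ssh/id_rsa")   # hypothetical key path
+    host        = self.public_ip
+  }
+}
+```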
+
+
+What is a "tainted resource"?
+
+It's a resource which was successfully created but failed during provisioning. Terraform will fail and mark this resource as "tainted".
+
+
+
+What does `terraform taint` do?
+
+`terraform taint resource.id` manually marks the resource as tainted in the state file. So when you run `terraform apply` the next time, the resource will be destroyed and recreated.
+
+
+
+What types of variables are supported in Terraform?
+
+* `string`
+* `number`
+* `bool`
+* `list(<TYPE>)`
+* `set(<TYPE>)`
+* `map(<TYPE>)`
+* `object({<ATTR_NAME> = <TYPE>, ...})`
+* `tuple([<TYPE>, ...])`
+
+
+
+What is a data source? In what scenarios, for example, would you need to use it?
+
+Data sources look up or compute values that can be used elsewhere in the Terraform configuration.
+
+There are quite a few cases where you might need to use them:
+* you want to reference resources not managed through Terraform (see the sketch after this list)
+* you want to reference resources managed by a different Terraform module
+* you want to cleanly compute a value with typechecking, such as with `aws_iam_policy_document`
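+
+A hedged sketch of looking up an existing AMI not managed by Terraform (the owner ID and name filter are examples):
+
+```
+data "aws_ami" "ubuntu" {
+  most_recent = true
+  owners      = ["099720109477"]   # Canonical's account, as an example
+
+  filter {
+    name   = "name"
+    values = ["ubuntu/images/hvm-ssd/ubuntu-*-amd64-server-*"]
+  }
+}
+
+resource "aws_instance" "web" {
+  ami           = data.aws_ami.ubuntu.id   # value computed by the data source
+  instance_type = "t3.micro"
+}
+```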
+
+
+
+What are output variables and what does `terraform output` do?
+
+Output variables are named values that are sourced from the attributes of a module. They are stored in the Terraform state and can be used by other modules through `remote_state`. `terraform output` prints the output values recorded in the state.
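+
+A minimal sketch (the resource and output names are hypothetical):
+
+```
+output "instance_ip" {
+  value = aws_instance.web_server.public_ip
+}
+```
+
+After an apply, `terraform output instance_ip` prints the value.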
+
+
+
+Explain Modules
+
+A Terraform module is a set of Terraform configuration files in a single directory. Modules are small, reusable Terraform configurations that let you manage a group of related resources as if they were a single resource. Even a simple configuration consisting of a single directory with one or more .tf files is a module. When you run Terraform commands directly from such a directory, it is considered the root module. So in this sense, every Terraform configuration is part of a module.
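+
+A hedged sketch of consuming a module (this one happens to live on the public registry; the name and values are illustrative):
+
+```
+module "vpc" {
+  source  = "terraform-aws-modules/vpc/aws"
+  version = "~> 3.0"
+
+  name = "my-vpc"
+  cidr = "10.0.0.0/16"
+}
+```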
+
+
+
+What is the Terraform Registry?
+
+The Terraform Registry provides a centralized location for official and community-managed providers and modules.
+
+
+
+Explain `remote-exec` and `local-exec`
+
+
+
+
+Explain "Remote State". When would you use it and how?
+Terraform generates a `terraform.tfstate` JSON file that describes the components/services provisioned on the specified provider. Remote state stores this file on remote storage media to enable collaboration within a team.
+
+
+
+Explain "State Locking"
+State locking is a mechanism that blocks operations against a specific state file by multiple callers, so as to avoid conflicting operations from different team members. Once the first caller's lock is released, another team member may go ahead and carry out their own operation. As usual, Terraform will first check the state file to see if the desired resource already exists and, if not, go ahead and create it.
+
+
+
+What is the "Random" provider? What is it used for
+ The random provider aids in generating numeric or alphabetic characters to use as a prefix or suffix for a desired named identifier.
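+
+A minimal sketch (the bucket name is hypothetical):
+
+```
+resource "random_id" "suffix" {
+  byte_length = 4
+}
+
+resource "aws_s3_bucket" "bucket" {
+  bucket = "my-bucket-${random_id.suffix.hex}"   # e.g. my-bucket-1a2b3c4d
+}
+```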
+
+
+
+How do you test a Terraform module?
+
+Many answers are acceptable, but the most common one would likely be to use the tool `terratest`, and to test that a module can be initialized, can create resources, and can destroy those resources cleanly.
+
+
+
+Aside from `.tfvars` files or CLI arguments, how can you inject dependencies from other modules?
+
+The built-in Terraform way would be to use the `terraform_remote_state` data source to look up the outputs of other modules.
+It is also common in the community to use a tool called `terragrunt` to explicitly inject variables between modules.
+
+
+
+What is Terraform import?
+
+Terraform import is used to import existing infrastructure. It allows you to take resources created by some other means (e.g. manually launched cloud resources) and bring them under Terraform management.
+
+
+
+How do you import an existing resource using Terraform import?
+
+1. Identify which resource you want to import.
+2. Write Terraform code matching the configuration of that resource.
+3. Run the command `terraform import RESOURCE ID`
+
+E.g. let's say you want to import an AWS instance. Then you'll perform the following:
+1. Identify that AWS instance in the console
+2. Refer to its configuration and write Terraform code which will look something like:
+```
+resource "aws_instance" "tf_aws_instance" {
+  ami           = data.aws_ami.ubuntu.id
+  instance_type = "t3.micro"
+
+  tags = {
+    Name = "import-me"
+  }
+}
+```
+3. Run the command `terraform import aws_instance.tf_aws_instance i-12345678`
+
+
diff --git a/scripts/count_questions.sh b/scripts/count_questions.sh
index c627b89..909c2d1 100755
--- a/scripts/count_questions.sh
+++ b/scripts/count_questions.sh
@@ -1,5 +1,3 @@
-#!/bin/bash
+#!/usr/bin/env bash
-# We dont care about non alphanumerics filenames so we just ls | grep to shorten the script.
-
-echo $(( $(grep \ -c README.md) + $(grep -i Solution README.md | grep \.md -c) ))
+echo $(( $(grep -E "\[Exercise\]|<summary>" -c README.md exercises/*/README.md | awk -F: '{ s+=$2 } END { print s }' )))