ℹ️ This repo contains questions and exercises on various technical topics, sometimes related to DevOps and SRE :)
📊 There are currently 1899 questions
📚 To learn more about DevOps and SRE, check the resources in devops-resources repository
⚠️ You can use these for preparing for an interview but most of the questions and exercises don't represent an actual interview. Please read FAQ page for more details
👥 Join our DevOps community where we have discussions and share resources on DevOps
📝 You can add more questions and exercises by submitting pull requests :) Read about contribution guidelines here
DevOps
What is DevOps?
You can answer it by describing what DevOps means to you and/or rely on how companies define it. I've put here a couple of examples.
Amazon:
"DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market."
Microsoft:
"DevOps is the union of people, process, and products to enable continuous delivery of value to our end users. The contraction of “Dev” and “Ops” refers to replacing siloed Development and Operations to create multidisciplinary teams that now work together with shared and efficient practices and tools. Essential DevOps practices include agile planning, continuous integration, continuous delivery, and monitoring of applications."
Red Hat:
"DevOps describes approaches to speeding up the processes by which an idea (like a new software feature, a request for enhancement, or a bug fix) goes from development to deployment in a production environment where it can provide value to the user. These approaches require that development teams and operations teams communicate frequently and approach their work with empathy for their teammates. Scalability and flexible provisioning are also necessary. With DevOps, those that need power the most, get it—through self service and automation. Developers, usually coding in a standard development environment, work closely with IT operations to speed software builds, tests, and releases—without sacrificing reliability."
Google:
"...The organizational and cultural movement that aims to increase software delivery velocity, improve service reliability, and build shared ownership among software stakeholders"
What are the benefits of DevOps? What can it help us to achieve?
- Collaboration
- Improved delivery
- Security
- Speed
- Scale
- Reliability
What are the anti-patterns of DevOps?
A couple of examples:
- One person is in charge of specific tasks. For example, there is only one person who is allowed to merge the code of everyone else into the repository.
- Treating production differently from the development environment. For example, not implementing security in the development environment
- Not allowing someone to push to production on Friday ;)
How would you describe a successful DevOps engineer or a team?
The answer can focus on:
- Collaboration
- Communication
- Set up and improve workflows and processes (related to testing, delivery, ...)
- Dealing with issues
Things to think about:
- What should DevOps teams or engineers NOT focus on or do?
- Do DevOps teams or engineers have to be innovative or practice innovation as part of their role?
Tooling
What are you taking into consideration when choosing a tool/technology?
A few ideas to think about:
- mature/stable vs. cutting edge
- community size
- architecture aspects - agent vs. agentless, master vs. masterless, etc.
- learning curve
Can you describe which tool or platform you chose to use in some of the following areas and how?
- CI/CD
- Provisioning infrastructure
- Configuration Management
- Monitoring & alerting
- Logging
- Code review
- Code coverage
- Issue Tracking
- Containers and Containers Orchestration
- Tests
This is a more practical version of the previous question, where you might be asked additional, specific questions about the technology you chose.
- CI/CD - Jenkins, Circle CI, Travis, Drone, Argo CD, Zuul
- Provisioning infrastructure - Terraform, CloudFormation
- Configuration Management - Ansible, Puppet, Chef
- Monitoring & alerting - Prometheus, Nagios
- Logging - Logstash, Graylog, Fluentd
- Code review - Gerrit, Review Board
- Code coverage - Cobertura, Clover, JaCoCo
- Issue tracking - Jira, Bugzilla
- Containers and Containers Orchestration - Docker, Podman, Kubernetes, Nomad
- Tests - Robot, Serenity, Gauge
A team member of yours suggests replacing the current CI/CD platform used by the organization with a new one. How would you reply?
Things to think about:
- What do we gain from doing so? Are there new features in the new platform? Does the new platform deal with some of the limitations of the current platform?
- What is this suggestion based on? In other words, did he/she try out the new platform? Was there extensive technical research?
- What will the switch from one platform to another require from the organization? For example, training users who use the platform? How much time does the team have to invest in such a move?
Version Control
What is Version Control?
- Version control is the system of tracking and managing changes to software code.
- It helps software teams to manage changes to source code over time.
- Version control also helps developers move faster and allows software teams to preserve efficiency and agility as the team scales to include more developers.
What is a commit?
- In Git, a commit is a snapshot of your repo at a specific point in time.
- The git commit command will save all staged changes, along with a brief description from the user, in a “commit” to the local repository.
What is a merge?
- Merging is Git's way of putting a forked history back together again. The git merge command lets you take the independent lines of development created by git branch and integrate them into a single branch.
What is a merge conflict?
- A merge conflict is an event that occurs when Git is unable to automatically resolve differences in code between two commits. When all the changes in the code occur on different lines or in different files, Git will successfully merge commits without your help.
What best practices are you familiar with regarding version control?
- Use a descriptive commit message
- Make each commit a logical unit
- Incorporate others' changes frequently
- Share your changes frequently
- Coordinate with your co-workers
- Don't commit generated files
Would you prefer a "configuration->deployment" model or "deployment->configuration"? Why?
Both have advantages and disadvantages. With the "configuration->deployment" model, for example, where you build one image to be used by multiple deployments, there is less chance of deployments being different from one another, so it has a clear advantage of a consistent environment.
Explain mutable vs. immutable infrastructure
In the mutable infrastructure paradigm, changes are applied on top of the existing infrastructure and over time the infrastructure builds up a history of changes. Ansible, Puppet and Chef are examples of tools which follow the mutable infrastructure paradigm.
In the immutable infrastructure paradigm, every change is actually a new infrastructure. So a change to a server will result in a new server instead of updating it. Terraform is an example of a technology which follows the immutable infrastructure paradigm.
Software Distribution
Explain "Software Distribution"
Read this fantastic article on the topic.
From the article: "Thus, software distribution is about the mechanism and the community that takes the burden and decisions to build an assemblage of coherent software that can be shipped."
Why are there multiple software distributions? What differences can they have?
Different distributions can focus on different things, such as: different environments (server vs. mobile vs. desktop), support for specific hardware, specialization in different domains (security, multimedia, ...), etc. Basically, different aspects of the software and what it supports get a different priority in each distribution.
What is a Software Repository?
Wikipedia: "A software repository, or “repo” for short, is a storage location for software packages. Often a table of contents is stored, as well as metadata."
Read more here
What ways are there to distribute software? What are the advantages and disadvantages of each method?
- Source - Maintain a build script within the version control system so that users can build your app after cloning the repository. Advantage: users can quickly check out different versions of the application. Disadvantage: requires build tools to be installed on the user's machine.
- Archive - Collect all your app files into one archive (e.g. tar) and deliver it to the user. Advantage: the user gets everything needed in one file. Disadvantage: requires repeating the same procedure when updating; not good if there are a lot of dependencies.
- Package - Depending on the OS, you can use your OS package format (e.g. in RHEL/Fedora it's RPM) to deliver your software with a way to install, uninstall and update it using the standard packager commands. Advantage: the package manager takes care of installation, uninstallation, updating and dependency management. Disadvantage: requires managing a package repository.
- Images - Either VM or container images where your package is included with everything it needs in order to run successfully. Advantage: everything is preinstalled and there is a high degree of environment isolation. Disadvantage: requires knowledge of building and optimizing images.
Are you familiar with "The Cathedral and the Bazaar models"? Explain each of the models
- Cathedral - source code released when software is released
- Bazaar - source code is always available publicly (e.g. Linux Kernel)
What is caching? How does it work? Why is it important?
Caching is fast access to frequently used resources which are computationally expensive or IO intensive and do not change often. There can be several layers of cache, starting from CPU caches up to distributed cache systems. Common ones are in-memory caching and distributed caching.
Caches are typically data structures that contain some data, such as a hashtable or dictionary. However, any data structure can provide caching capabilities, like a set, sorted set, sorted dictionary, etc. While caching is used in many applications, it can create subtle bugs if not implemented or used correctly. For example, cache invalidation, expiration or updating is usually quite challenging.
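To make the idea concrete, here is a minimal Python sketch of in-memory caching using the standard library's functools.lru_cache; the one-second sleep stands in for an expensive database call or computation:

```python
from functools import lru_cache
import time

@lru_cache(maxsize=128)
def expensive_lookup(key):
    """Simulate a slow, IO- or CPU-intensive computation."""
    time.sleep(1)  # stand-in for a database query or heavy calculation
    return key * 2

start = time.time()
expensive_lookup(21)   # first call: computed and stored in the cache
expensive_lookup(21)   # second call: served from the in-memory cache
print(f"Two calls took {time.time() - start:.2f}s")  # roughly 1s instead of 2s
print(expensive_lookup.cache_info())                 # hits=1, misses=1
```

Note that lru_cache never invalidates entries on its own, which is exactly the kind of subtle issue (stale data, invalidation) mentioned above.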
Explain stateless vs. stateful
Stateless applications don't store any data on the host, which makes them ideal for horizontal scaling and microservices. Stateful applications depend on storage to save state and data; typically, databases are stateful applications.
What is Reliability? How does it fit DevOps?
Reliability, when used in a DevOps context, is the ability of a system to recover from infrastructure failure or disruption. Part of it is also being able to scale based on your organization's or team's demands.
What "Availability" means? What means are there to track Availability of a service?
Why 100% availability isn't a target? Why most companies or teams set it to be 99%.X?
Describe the workflow of setting up some type of web server (Apache, IIS, Tomcat, ...)
How does a web server work?
Explain "Open Source"
Describe the architecture of a service/app/project/... you designed and/or implemented
What types of tests are you familiar with?
Styling, unit, functional, API, integration, smoke, scenario, ...
You should be able to explain those that you mention.
You need to periodically install a package (unless it already exists) on different operating systems (Ubuntu, RHEL, ...). How would you do it?
There are multiple ways to answer this question (there is no right and wrong here):
- Simple cron job
- Pipeline with configuration management technology (such as Puppet, Ansible, Chef, etc.) ...
What is Chaos Engineering?
Wikipedia: "Chaos engineering is the discipline of experimenting on a software system in production in order to build confidence in the system's capability to withstand turbulent and unexpected conditions"
Read about Chaos Engineering here
What is "infrastructure as code"? What implementation of IAC are you familiar with?
IAC (infrastructure as code) is a declarative approach of defining the infrastructure or architecture of a system. Some implementations are ARM templates for Azure and Terraform, which can work across multiple cloud providers.
What benefits does infrastructure-as-code have?
- fully automated process of provisioning, modifying and deleting your infrastructure
- version control for your infrastructure which allows you to quickly rollback to previous versions
- validate infrastructure quality and stability with automated tests and code reviews
- makes infrastructure tasks less repetitive
How do you manage build artifacts?
Build artifacts are usually stored in a repository. They can be used in release pipelines for deployment purposes. Usually there is a retention period on the build artifacts.
What Continuous Integration solution are you using/prefer and why?
What deployment strategies are you familiar with or have used?
There are several deployment strategies:
* Rolling
* Blue green deployment
* Canary releases
* Recreate strategy
You joined a team where everyone is developing one project and the practice is to run tests locally on their workstation and push to the repository if the tests passed. What is the problem with the process as it is now and how would you improve it?
Explain test-driven development (TDD)
Explain agile software development
What do you think about the following sentence?: "implementing or practicing DevOps leads to more secure software"
Do you know what is a "post-mortem meeting"? What is your opinion on that?
What is a configuration drift? What problems is it causing?
Configuration drift happens when, in an environment of servers with the exact same configuration and software, a certain server or servers receive updates or configuration which the other servers don't get, and over time these servers become slightly different from all the others.
This situation might lead to bugs which are hard to identify and reproduce.
How to deal with a configuration drift?
Configuration drift can be avoided with a desired state configuration (DSC) implementation. Desired state configuration can be a declarative file that defines how a system should be. There are tools to enforce the desired state, such as Terraform or Azure DSC. There are incremental or complete strategies.
Explain Declarative and Procedural styles. The technologies you are familiar with (or using) are using procedural or declarative style?
Declarative - you write code that specifies the desired end state.
Procedural - you describe the steps to get to the desired end state.
Declarative tools - Terraform, Puppet, CloudFormation. Procedural tools - Ansible, Chef.
To better emphasize the difference, consider creating two virtual instances/servers. In the declarative style, you would specify two servers and the tool will figure out how to reach that state. In the procedural style, you need to specify the steps to reach the end state of two instances/servers - for example, create a loop and in each iteration of the loop create one instance (running the loop twice of course).
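As a rough illustration of the procedural style, here is a hedged boto3 (AWS SDK for Python) sketch that launches the two instances step by step; the AMI ID and instance type are placeholders. A declarative tool such as Terraform would instead be told that two instances should exist and would converge toward that state:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Procedural style: spell out the steps - loop twice, launching one instance
# per iteration - rather than declaring the desired end state.
for _ in range(2):
    ec2.run_instances(
        ImageId="ami-12345678",   # placeholder AMI ID
        InstanceType="t3.micro",  # placeholder instance type
        MinCount=1,
        MaxCount=1,
    )

# A declarative tool is instead given the end state ("2 instances") and decides
# on its own whether to create, keep, or destroy resources to reach it.
```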
Do you have experience with testing cross-projects changes? (aka cross-dependency)
Note: cross-dependency is when you have two or more changes to separate projects and you would like to test them in a mutual build instead of testing each change separately.
Have you contributed to an open source project? Tell me about this experience
What is Distributed Tracing?
What is GitOps?
GitLab: "GitOps is an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and CI/CD tooling, and applies them to infrastructure automation".
Read more here
SRE
What are the differences between SRE and DevOps?
Google: "One could view DevOps as a generalization of several core SRE principles to a wider range of organizations, management structures, and personnel."
Read more about it here
What is the SRE team responsible for?
Google: "the SRE team is responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of their services"
Read more about it here
What is an error budget?
Atlassian: "An error budget is the maximum amount of time that a technical system can fail without contractual consequences."
Read more about it here
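For intuition, a short Python calculation (assuming a 30-day month) of the downtime budget implied by a few common availability targets:

```python
# Error budget (allowed downtime) implied by an availability target,
# assuming a 30-day month.
minutes_per_month = 30 * 24 * 60  # 43,200 minutes

for slo in (0.99, 0.999, 0.9999):
    budget = (1 - slo) * minutes_per_month
    print(f"SLO {slo:.2%}: error budget of {budget:.1f} minutes/month")

# SLO 99.00%: error budget of 432.0 minutes/month
# SLO 99.90%: error budget of 43.2 minutes/month
# SLO 99.99%: error budget of 4.3 minutes/month
```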
What do you think about the following statement: "100% is the only right availability target for a system"
Wrong. No system can guarantee 100% availability, as no system is immune to downtime. Many systems and services will fall somewhere between 99% and 100% uptime (or at least this is how most systems and services should be).
What are MTTF (mean time to failure) and MTTR (mean time to repair)? What do these metrics help us evaluate?
* MTTF (mean time to failure), also known as uptime, can be defined as how long the system runs before it fails.
* MTTR (mean time to recover) on the other hand, is the amount of time it takes to repair a broken system.
* MTBF (mean time between failures) is the amount of time between failures of the system.
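These metrics help evaluate how reliable a system is and how quickly it recovers. A common way to combine them is the steady-state availability formula, availability = MTTF / (MTTF + MTTR); a small sketch with made-up numbers:

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability from mean time to failure and mean time to repair."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Example: the system runs ~500 hours between failures and takes ~1 hour to repair.
print(f"{availability(500, 1):.4%}")  # 99.8004%
```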
What is the role of monitoring in SRE?
Google: "Monitoring is one of the primary means by which service owners keep track of a system’s health and availability"
Read more about it here
CI/CD
CI/CD Exercises
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Set up a CI pipeline | CI | Exercise | ||
Deploy to Kubernetes | Deployment | Exercise | Solution | |
Jenkins - Remove Jobs | Jenkins Scripts | Exercise | Solution | |
Jenkins - Remove Builds | Jenkins Scripts | Exercise | Solution |
CI/CD Self Assessment
What is Continuous Integration?
A development practice where developers integrate code into a shared repository frequently. It can range from a couple of changes a day or a week to several changes an hour at larger scales.
Each piece of code (change/patch) is verified, to make sure the change is safe to merge. Today, it's a common practice to test the change using an automated build that makes sure the code can be integrated. It can be one build which runs several tests at different levels (unit, functional, etc.) or several separate builds, all or some of which have to pass in order for the change to be merged into the repository.
What is Continuous Deployment?
A development strategy used by developers to release software automatically into production, where any code commit must pass through an automated testing phase. Only when this is successful is the release considered production worthy. This eliminates any human interaction and should be implemented only after production-ready pipelines have been set up with real-time monitoring and reporting of deployed assets. If any issues are detected in production, it should be easy to roll back to the previous working state.
For more info please read here
Can you describe an example of a CI (and/or CD) process starting the moment a developer submitted a change/PR to a repository?
There are many answers for such a question, as CI processes vary, depending on the technologies used and the type of the project to where the change was submitted. Such processes can include one or more of the following stages:
- Compile
- Build
- Install
- Configure
- Update
- Test
An example of one possible answer:
A developer submitted a pull request to a project. The PR (pull request) triggered two jobs (or one combined job): one job for running a lint test on the change, and a second job for building a package which includes the submitted change and running multiple API/scenario tests using that package. Once all tests pass and the change is approved by a maintainer/core, it's merged/pushed to the repository. If some of the tests fail, the change will not be allowed to be merged/pushed to the repository.
A completely different answer or CI process can describe how a developer pushes code to a repository, a workflow is then triggered to build a container image and push it to the registry, and once it's in the registry, the new changes are applied to the k8s cluster.
What is Continuous Delivery?
A development strategy used to frequently deliver code to QA and Ops for testing. This entails having a staging area with production-like features where changes can only be accepted for production after a manual review. Because of this human involvement there is usually a time lag between release and review, making it slower and more error prone compared to continuous deployment.
For more info please read here
What is the difference between Continuous Delivery and Continuous Deployment?
Both encapsulate the same process of deploying the changes which were compiled and/or tested in the CI pipelines.
The difference between the two is that Continuous Delivery isn't a fully automated process, as opposed to Continuous Deployment where every change that is tested in the process is eventually deployed to production. In continuous delivery someone is either approving the deployment process, or the deployment process is based on constraints and conditions (like a time constraint of deploying every week/month/...)
What CI/CD best practices are you familiar with? Or what do you consider as CI/CD best practice?
- Commit and test often.
- Testing/Staging environment should be a clone of production environment.
- Clean up your environments (e.g. your CI/CD pipelines may create a lot of resources. They should also take care of cleaning up everything they create)
- The CI/CD pipelines should provide the same results when executed locally or remotely
- Treat CI/CD as another application in your organization, not as glue code.
- On demand environments instead of pre-allocated resources for CI/CD purposes
- Stages/Steps/Tasks of pipelines should be shared between applications or microservices (don't re-invent common tasks like "cloning a project")
You are given a pipeline and a pool with 3 workers: a virtual machine, a bare-metal server and a container. How will you decide on which one of them to run the pipeline?
Where do you store CI/CD pipelines? Why?
There are multiple approaches as to where to store the CI/CD pipeline definitions:
- App Repository - store them in the same repository of the application they are building or testing (perhaps the most popular one)
- Central Repository - store all organization's/project's CI/CD pipelines in one separate repository (perhaps the best approach when multiple teams test the same set of projects and they end up having many pipelines)
- CI repo for every app repo - you separate CI related code from app code but you don't put everything in one place (perhaps the worst option due to the maintenance)
- The platform where the CI/CD pipelines are being executed (e.g. Kubernetes Cluster in case of Tekton/OpenShift Pipelines).
How do you perform capacity planning for your CI/CD resources? (e.g. servers, storage, etc.)
How would you structure/implement CD for an application which depends on several other applications?
How do you measure your CI/CD quality? Are there any metrics or KPIs you are using for measuring the quality?
CI/CD - Jenkins
What is Jenkins? What have you used it for?
Jenkins is an open source automation tool written in Java with plugins built for Continuous Integration purpose. Jenkins is used to build and test your software projects continuously making it easier for developers to integrate changes to the project, and making it easier for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.
Jenkins integrates development life-cycle processes of all kinds, including build, document, test, package, stage, deploy, static analysis and much more.
What are the advantages of Jenkins over its competitors? Can you compare it to one of the following systems?
- Travis
- Bamboo
- Teamcity
- CircleCI
What are the limitations or disadvantages of Jenkins?
This might be considered to be an opinionated answer:
- Old-fashioned dashboards with few options to customize them
- Containers readiness (this has improved with Jenkins X)
- By itself, it doesn't have many features. On the other hand, there are many plugins created by the community to expand its abilities
- Managing Jenkins and its pipelines as code can be one hell of a nightmare
Explain the following:
- Job
- Build
- Plugin
- Node or Worker
- Executor
What plugins have you used in Jenkins?
Have you used Jenkins for CI or CD processes? Can you describe them?
What type of jobs are there? Which types have you used?
How did you report build results to users? What ways are there to report the results?
You can report via:
- Emails
- Messaging apps
- Dashboards
Each has its own disadvantages and advantages. Emails for example, if sent too often, can be eventually disregarded or ignored.
You need to run unit tests every time a change is submitted to a given project. Describe in detail what your pipeline would look like and what will be executed in each stage
The pipeline will have multiple stages:
- Clone the project
- Install test dependencies (for example, if I need tox package to run the tests, I will install it in this stage)
- Run unit tests
- (Optional) report results (For example an email to the users)
- Archive the relevant logs/files
How to secure Jenkins?
Jenkins documentation provides some basic intro for securing your Jenkins server.
Describe how you add new nodes (agents) to Jenkins
You can describe the UI way to add new nodes, but it's better to explain how to do it in a way that scales, like a script or using a dynamic source for nodes such as one of the existing clouds.
How to acquire multiple nodes for one specific build?
Whenever a build fails, you would like to notify the team owning the job regarding the failure and provide failure reason. How would you do that?
There are four teams in your organization. How would you prioritize the builds of each team? So that the jobs of team x will always run before team y, for example
If you are managing a dozen jobs, you can probably use the Jenkins UI. But how do you manage the creation and deletion of hundreds of jobs every week/month?
What are some of Jenkins limitations?
- Testing cross-dependencies (changes from multiple projects together)
- Starting builds from any stage (although Cloudbees implemented something called checkpoints)
What is the difference between a scripted pipeline and a declarative pipeline? Which type are you using?
How would you implement an option of starting a build from a certain stage and not from the beginning?
Do you have experience with developing a Jenkins plugin? Can you describe this experience?
Have you written Jenkins scripts? If yes, what for and how do they work?
CI/CD - GitHub Actions
What is a Workflow in GitHub Actions?
A YAML file that defines the automation actions and instructions to execute upon a specific event.
The file is placed in the repository itself.
A Workflow can be anything - running tests, compiling code, building packages, ...
What is a Runner in GitHub Actions?
A workflow has to be executed somewhere. The environment where the workflow is executed is called a Runner.
A Runner can be an on-premise host or GitHub-hosted.
What is a Job in GitHub Actions?
A job is a series of steps which are executed on the same runner/environment.
A workflow must include at least one job.
What is an Action in GitHub Actions?
An action is the smallest unit in a workflow. It includes the commands to execute as part of the job.
In a GitHub Actions workflow, what is the 'on' attribute/directive used for?
Specify upon which events the workflow will be triggered.
For example, you might configure the workflow to trigger every time a change is pushed to the repository.
True or False? In GitHub Actions, jobs are executed in parallel by default
True
How to create dependencies between jobs so one job runs after another?
Using the "needs" attribute/directive.
jobs:
  job1:
  job2:
    needs: job1
In the above example, job1 must complete successfully before job2 runs
How to add a Workflow to a repository?
CLI:
- Create the directory .github/workflows in the repository
- Add a YAML file
UI:
- In the repository page, click on "Actions"
- Choose workflow and click on "Set up this workflow"
Cloud
What is Cloud Computing? What is a Cloud Provider?
Cloud computing refers to the delivery of on-demand computing services over the internet on a pay-as-you-go basis.
In simple words, cloud computing is a service that lets you use any computing service such as servers, storage, networking, databases, and intelligence, right through your browser, without owning anything. You can do anything you can think of, as long as it doesn't require you to stay physically close to your hardware.
Cloud service providers are companies that establish public clouds, manage private clouds, or offer on-demand cloud computing components (also known as cloud computing services) like Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Cloud services can reduce business process costs when compared to on-premise IT.
What are the advantages of cloud computing? Mention at least 3 advantages
- Pay as you go: you are paying only for what you are using. No upfront payments and payment stops when resources are no longer used.
- Scalable: resources are scaled down or up based on demand
- High availability: resources and applications provide seamless experience, even when some services are down
- Disaster recovery
True or False? Cloud computing is a consumption-based model (users only pay for resources they use)
True
What types of Cloud Computing services are there?
IAAS - Infrastructure as a Service
PAAS - Platform as a Service
SAAS - Software as a Service
Explain each of the following and give an example:
- IAAS
- PAAS
- SAAS
What types of clouds (or cloud deployments) are there?
- Public - Cloud services sharing computing resources among multiple customers
- Private - Cloud services having computing resources limited to a specific customer or organization, managed by a third party or the organization itself
- Hybrid - Combination of public and private clouds
What are the differences between Cloud Providers and On-Premise solution?
With cloud providers, someone else owns and manages the hardware, hires the relevant infrastructure teams and pays for the real estate (for both hardware and people). You can focus on your business.
With an on-premise solution, it's quite the opposite. You need to take care of the hardware and infrastructure teams and pay for everything, which can be quite expensive. On the other hand, it's tailored to your needs.
What is Serverless Computing?
The main idea behind serverless computing is that you don't need to manage the creation and configuration of servers. All you need to focus on is splitting your app into multiple functions which will be triggered by some actions.
It's important to note that:
- Serverless Computing is still using servers. So saying there are no servers in serverless computing is completely wrong
- Serverless Computing allows you to have a different paying model. You basically pay only when your functions are running and not when the VM or containers are running as in other payment models
Can we replace any type of computing on servers with serverless?
Is there a difference between managed service to SaaS or is it the same thing?
What is auto scaling?
AWS definition: "AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost"
Read more about auto scaling here
True or False? Auto Scaling is about adding resources (such as instances) and not about removing resources
False. Auto scaling adjusts capacity, which can also mean removing resources based on usage and performance.
Cloud - Security
How to secure instances in the cloud?
- Instance should have minimal permissions needed. You don't want an instance-level incident to become an account-level incident
- Instances should be accessed through load balancers or bastion hosts. In other words, they should be off the internet (in a private subnet behind a NAT).
- Using latest OS images with your instances (or at least apply latest patches)
AWS
AWS Exercises
AWS - IAM
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Create a User | IAM | Exercise | Solution | |
Password Policy | IAM | Exercise | Solution |
AWS - Lambda
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Hello Function | Lambda | Exercise | Solution | |
URL Function | Lambda | Exercise | Solution |
AWS Self Assessment
AWS - Global Infrastructure
Explain the following
- Availability zone
- Region
- Edge location
AWS regions are separate geographical locations worldwide in which AWS hosts its data centers.
Within each region, there are multiple isolated locations known as Availability Zones. Each availability zone is one or more data centers with redundant networking, connectivity and power supply. Multiple availability zones ensure high availability in case one of them goes down.
Edge locations are basically content delivery network endpoints which cache data and ensure lower latency and faster delivery to users in any location. They are located in major cities around the world.
True or False? Each AWS region is designed to be completely isolated from the other AWS regions
True.
True or False? Each region has a minimum number of 1 availability zones and the maximum is 4
False. The minimum is 2 while the maximum is 6.
What considerations should you take into account when choosing an AWS region for running a new application?
- Service availability: not all services (and all their features) are available in every region
- Reduced latency: deploy application in a region that is close to customers
- Compliance: some countries have stricter rules and requirements, such as making sure the data stays within the borders of the country or the region. In that case, only specific regions can be used for running the application
- Pricing: the pricing might not be consistent across regions, so the price of the same service in different regions might be different.
AWS - IAM
What is IAM? What are some of its features?
Full explanation is here In short: it's used for managing users, groups, access policies & roles
True or False? IAM configuration is defined globally and not per region
True
True or False? When creating an AWS account, root account is created by default. This is the recommended account to use and share in your organization
False. Instead of using the root account, you should be creating users and use them.
True or False? Groups in AWS IAM, can contain only users and not other groups
True
True or False? Users in AWS IAM, can belong only to a single group
False. Users can belong to multiple groups.
What best practices are there regarding IAM in AWS?
- Set up MFA
- Delete root account access keys
- Create IAM users instead of using root for daily management
- Apply "least privilege principle": give users only the permissions they need, nothing more than that.
What permissions does a new user have?
Only login access.
What are Roles?
A way of allowing one AWS service to use another AWS service. You assign roles to AWS resources. For example, you can make use of a role which allows the EC2 service to access S3 buckets (read and write).
What are Policies?
Policies are documents used to define permissions as to what a user, group or role is able to do. Their format is JSON.
A user is unable to access an s3 bucket. What might be the problem?
There can be several reasons for that. One of them is lack of permissions. To solve that, the admin has to attach a policy to the user that allows them to access the S3 bucket.
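For example, a hedged boto3 sketch of attaching an AWS managed policy to the user; the user name is a placeholder, and in practice you would prefer a least-privilege policy scoped to the specific bucket:

```python
import boto3

iam = boto3.client("iam")

# Attach a managed read-only S3 policy to the user (user name is a placeholder).
iam.attach_user_policy(
    UserName="example-user",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```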
What should you use to:
- Grant access between two services/resources?
- Grant user access to resources/services?
- Role
- Policy
What do statements consist of in AWS IAM policies?
- Sid: identifier of the statement (optional)
- Effect: allow or deny access
- Action: list of actions (to deny or allow)
- Resource: a list of resources to which the actions are applied
- Principal: role or account or user to which to apply the policy
- Condition: conditions to determine when the policy is applied (optional)
Explain the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}
This policy permits performing any action on any resource. It happens to be the "AdministratorAccess" policy.
AWS - Compute
What is EC2?
"a web service that provides secure, resizable compute capacity in the cloud". Read more here
True or False? EC2 is a regional service
True. As opposed to IAM for example, which is a global service, EC2 is a regional service.
What is AMI?
Amazon Machine Images is "An Amazon Machine Image (AMI) provides the information required to launch an instance". Read more here
What are the different sources for AMIs?
- Personal AMIs - AMIs you create
- AWS Marketplace for AMIs - Paid AMIs, usually bundled with licensed software
- Community AMIs - Free
What is an instance type?
"the instance type that you specify determines the hardware of the host computer used for your instance" Read more about instance types here
True or False? The following are instance types available for a user in AWS:
- Compute optimized
- Network optimized
- Web optimized
False. From the above list only compute optimized is available.
What is EBS?
"provides block level storage volumes for use with EC2 instances. EBS volumes behave like raw, unformatted block devices." More on EBS here
What EC2 pricing models are there?
On Demand - pay a fixed rate by the hour/second with no commitment. You can provision and terminate it at any given time.
Reserved - you get a capacity reservation; basically you purchase an instance for a fixed period of time. The longer, the cheaper.
Spot - enables you to bid whatever price you want for instances, or pay the spot price.
Dedicated Hosts - a physical EC2 server dedicated for your use.
What are Security Groups?
"A security group acts as a virtual firewall that controls the traffic for one or more instances" More on this subject here
How to migrate an instance to another availability zone?
What can you attach to an EC2 instance in order to store data?
EBS
What EC2 RI types are there?
Standard RI - most significant discount + suited for steady-state usage
Convertible RI - discount + change attributes of the RI + suited for steady-state usage
Scheduled RI - launch within time windows you reserve
Learn more about EC2 RI here
You would like to invoke a function every time you enter a URL in the browser. Which service would you use for that?
AWS Lambda
AWS - Lambda
Explain what is AWS Lambda
AWS definition: "AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume."
Read more on it here
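A minimal Python Lambda handler, to give a feel for the programming model; the exact shape of the event depends on the trigger (API Gateway, S3, SQS, ...):

```python
# handler.py - a minimal AWS Lambda function in Python.
# Lambda invokes the function configured as the handler, e.g. "handler.lambda_handler".
def lambda_handler(event, context):
    # 'event' carries the trigger payload, 'context' carries runtime metadata
    # (request id, remaining execution time, ...).
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```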
True or False? In AWS Lambda, you are charged as long as a function exists, regardless of whether it's running or not
False. Charges are made only when the code is executed.
Which of the following sets of languages does Lambda support?
- R, Swift, Rust, Kotlin
- Python, Ruby, Go
- Python, Ruby, PHP
- Python, Ruby, Go
True or False? Basic lambda permissions allow you only to upload logs to Amazon CloudWatch Logs
True
AWS Containers
What is Amazon ECS?
Amazon definition: "Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. Customers such as Duolingo, Samsung, GE, and Cook Pad use ECS to run their most sensitive and mission critical applications because of its security, reliability, and scalability."
Learn more here
What is Amazon ECR?
Amazon definition: "Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images."
Learn more here
What is AWS Fargate?
Amazon definition: "AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS)."
Learn more here
AWS Storage
Explain what is AWS S3?
S3 stands for Simple Storage Service (three S's). S3 is an object storage service which is fast, scalable and durable. S3 enables customers to upload, download or store any file or object that is up to 5 TB in size.
More on S3 here
What is a bucket?
An S3 bucket is a resource which is similar to a folder in a file system; it allows storing objects, which consist of data.
True or False? A bucket name must be globally unique
True
Explain folders and objects in regards to buckets
- Folder - any sub folder in an s3 bucket
- Object - The files which are stored in a bucket
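To make buckets, keys and "folders" concrete, a short boto3 sketch; the bucket name is a placeholder and must be globally unique:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Bucket names are globally unique, so this exact name is only a placeholder.
s3.create_bucket(Bucket="my-example-bucket-123456")

# Store an object; the "notes/" prefix is what appears as a folder in the console.
s3.put_object(
    Bucket="my-example-bucket-123456",
    Key="notes/hello.txt",
    Body=b"hello from S3",
)
```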
Explain the following:
- Object Lifecycles
- Object Sharing
- Object Versioning
- Object Lifecycles - Transfer objects between storage classes based on defined rules of time periods
- Object Sharing - Share objects via a URL link
- Object Versioning - Manage multiple versions of an object
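Object sharing via a URL is typically done with presigned URLs; a hedged boto3 sketch (bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Generate a time-limited link to a private object (bucket/key are placeholders).
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-example-bucket-123456", "Key": "notes/hello.txt"},
    ExpiresIn=3600,  # link expires after one hour
)
print(url)
```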
Explain Object Durability and Object Availability
Object Durability: the percent over a one-year time period that a file will not be lost
Object Availability: the percent over a one-year time period that a file will be accessible
What is a storage class? What storage classes are there?
Each object has a storage class assigned to it, affecting its availability and durability. This also has an effect on costs. Storage classes offered today:
- Standard:
  - Used for general, all-purpose storage (mostly storage that needs to be accessed frequently)
  - The most expensive storage class
  - 11x9% durability
  - 2x9% availability
  - Default storage class
- Standard-IA (Infrequent Access):
  - Long-lived, infrequently accessed data which must be available the moment it's being accessed
  - 11x9% durability
  - 99.90% availability
- One Zone-IA (Infrequent Access):
  - Long-lived, infrequently accessed, non-critical data
  - Less expensive than the Standard and Standard-IA storage classes
  - 2x9% durability
  - 99.50% availability
- Intelligent-Tiering:
  - Long-lived data with changing or unknown access patterns. Basically, in this class the data automatically moves to the class most suitable for it based on usage patterns
  - Price depends on the class used
  - 11x9% durability
  - 99.90% availability
- Glacier: archive data with retrieval time ranging from minutes to hours
- Glacier Deep Archive: archive data that rarely, if ever, needs to be accessed, with retrieval times in hours
- Both Glacier and Glacier Deep Archive are:
  - The cheapest storage classes
  - 9x9% durability
More on storage classes here
A customer would like to move data which is rarely accessed from the standard storage class to the cheapest class there is. Which storage class should be used?
- One Zone-IA
- Glacier Deep Archive
- Intelligent-Tiering
Glacier Deep Archive
What Glacier retrieval options are available for the user?
Expedited, Standard and Bulk
True or False? Each AWS account can store up to 500 PetaByte of data. Any additional storage will cost double
False. Unlimited capacity.
Explain what is Storage Gateway
"AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage". More on Storage Gateway here
Explain the following Storage Gateway deployments types
- File Gateway
- Volume Gateway
- Tape Gateway
Explained in detail here
What is the difference between stored volumes and cached volumes?
Stored Volumes - Data is located at the customer's data center and periodically backed up to AWS
Cached Volumes - Data is stored in the AWS cloud and cached at the customer's data center for quick access
What is "Amazon S3 Transfer Acceleration"?
AWS definition: "Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket"
Learn more here
Explain data consistency
S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in S3 buckets in all AWS Regions. S3 always returns the latest version of the file.
Can you host dynamic websites on S3? What about static websites?
No. S3 supports only static website hosting. On a static website, individual webpages include static content. They might also contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting.
What security measures have you taken in context of S3?
* Enable versioning
* Don't make buckets public
* Enable encryption if it's disabled
What storage options are there for EC2 Instances?
What is Amazon EFS?
Amazon definition: "Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources."
Learn more here
What is AWS Snowmobile?
"AWS Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS."
Learn more here
AWS Disaster Recovery
In regards to disaster recovery, what is RTO and RPO?
RTO - The maximum acceptable length of time that your application can be offline.
RPO - The maximum acceptable length of time during which data might be lost from your application due to an incident.
What types of disaster recovery techniques does AWS support?
- The Cold Method - periodic backups and sending the backups off-site
- Pilot Light - Data is mirrored to an environment which is always running
- Warm Standby - Running scaled down version of production environment
- Multi-site - Duplicated environment that is always running
Which disaster recovery option has the highest downtime and which has the lowest?
Lowest - Multi-site
Highest - The cold method
AWS CloudFront
Explain what is CloudFront
AWS definition: "Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment."
More on CloudFront here
Explain the following
- Origin
- Edge location
- Distribution
What delivery methods are available for the user with CDN?
True or False? Objects are cached for the life of the TTL
True
What is AWS Snowball?
A transport solution which was designed for transferring large amounts of data (petabyte-scale) into and out of the AWS cloud.
AWS ELB
What is ELB (Elastic Load Balancing)?
AWS definition: "Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions."
More on ELB here
What types of load balancers are supported in EC2 and what are they used for?
- Application LB - layer 7 traffic
- Network LB - ultra-high performances or static IP address (layer 4)
- Classic LB - low costs, good for test or dev environments (retired by August 15, 2022)
- Gateway LB - transparent network gateway that distributes traffic to appliances such as firewalls, intrusion detection and prevention systems, and deep packet inspection systems (layer 3)
AWS Security
What is the shared responsibility model? What is AWS responsible for and what is the user responsible for, based on the shared responsibility model?
The shared responsibility model defines what the customer is responsible for and what AWS is responsible for.
More on the shared responsibility model here
True or False? Based on the shared responsibility model, Amazon is responsible for physical CPUs and security groups on instances
False. AWS is responsible for the hardware in its sites, but not for security groups, which are created and managed by the users.
Explain "Shared Controls" in regards to the shared responsibility model
AWS definition: "apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services"
Learn more about it here
What is the AWS compliance program?
How to secure instances in AWS?
- Instance IAM roles should have minimal permissions needed. You don't want an instance-level incident to become an account-level incident
- Use "AWS System Manager Session Manager" for SSH
- Using latest OS images with your instances
What is AWS Artifact?
AWS definition: "AWS Artifact is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to AWS’ security and compliance reports and select online agreements."
Read more about it here
What is AWS Inspector?
AWS definition: "Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.""
Learn more here
What is AWS GuardDuty?
AWS definition: "Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your Amazon Web Services accounts, workloads, and data stored in Amazon S3"
It monitors VPC Flow Logs, DNS logs, CloudTrail S3 events and CloudTrail management events.
What is AWS Shield?
AWS definition: "AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS."
What is AWS WAF? Give an example of how it can be used and describe what resources or services you can use it with
What is AWS VPN used for?
What is the difference between Site-to-Site VPN and Client VPN?
What is AWS CloudHSM?
Amazon definition: "AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud."
Learn more here
True or False? AWS Inspector can perform both network and host assessments
True
What is AWS Key Management Service (KMS)?
AWS definition: "KMS makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications." More on KMS here
What is AWS Acceptable Use Policy?
It describes prohibited uses of the web services offered by AWS. More on AWS Acceptable Use Policy here
True or False? A user is not allowed to perform penetration testing on any of the AWS services
False. On some services, like EC2, CloudFront and RDS, penetration testing is allowed.
True or False? DDoS attack is an example of allowed penetration testing activity
False.
True or False? AWS Access Key is a type of MFA device used for AWS resources protection
False. Security key is an example of an MFA device.
What is Amazon Cognito?
Amazon definition: "Amazon Cognito handles user authentication and authorization for your web and mobile apps."
Learn more here
What is AWS ACM?
Amazon definition: "AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources."
Learn more here
AWS Databases
What is AWS RDS?
What is AWS DynamoDB?
Explain "Point-in-Time Recovery" feature in DynamoDB
Amazon definition: "You can create on-demand backups of your Amazon DynamoDB tables, or you can enable continuous backups using point-in-time recovery. For more information about on-demand backups, see On-Demand Backup and Restore for DynamoDB."
Learn more here
Explain "Global Tables" in DynamoDB
Amazon definition: "A global table is a collection of one or more replica tables, all owned by a single AWS account."
Learn more here
What is DynamoDB Accelerator?
Amazon definition: "Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds..."
Learn more here
What is AWS Redshift and how is it different than RDS?
Redshift is a cloud data warehouse, optimized for analytical (OLAP) workloads, whereas RDS is a managed relational database service intended for transactional (OLTP) workloads.
What do you do if you suspect AWS Redshift is performing slowly?
- You can confirm your suspicion by going to AWS Redshift console and see running queries graph. This should tell you if there are any long-running queries.
- If confirmed, you can query for running queries and cancel the irrelevant queries
- Check for connection leaks (query for running connections and include their IP)
- Check for table locks and kill irrelevant locking sessions
What is AWS ElastiCache? For what cases is it used?
Amazon ElastiCache is a fully managed Redis or Memcached in-memory data store. It's great for use cases like two-tier web applications where the most frequently accessed data is stored in ElastiCache so response time is optimal.
What is Amazon Aurora
A MySQL & Postgresql based relational database. Also, the default database proposed for the user when using RDS for creating a database. Great for use cases like two-tier web applications that has a MySQL or Postgresql database layer and you need automated backups for your application.
What is Amazon DocumentDB?
Amazon definition: "Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data."
Learn more here
What "AWS Database Migration Service" is used for?
What type of storage is used by Amazon RDS?
EBS
Explain Amazon RDS Read Replicas
AWS definition: "Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads." Read more about here
AWS Networking
What is VPC?
"A logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network that you define" Read more about it here.
True or False? VPC spans multiple regions
False
True or False? Subnets belong to the same VPC, can be in different availability zones
True. Just to clarify, a single subnet resides entirely in one AZ.
What is an Internet Gateway?
"component that allows communication between instances in your VPC and the internet" (AWS docs). Read more about it here
True or False? NACL allow or deny traffic on the subnet level
True
What is VPC peering?
docs.aws: "A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses."
True or False? Multiple Internet Gateways can be attached to one VPC
False. Only one internet gateway can be attached to a single VPC.
What is an Elastic IP address?
An Elastic IP address is a reserved public IP address that you can assign to any EC2 instance in a particular region, until you choose to release it. When you associate an Elastic IP address with an EC2 instance, it replaces the default public IP address. If an external hostname was allocated to the instance from your launch settings, it will also replace this hostname; otherwise, it will create one for the instance. The Elastic IP address remains in place through events that normally cause the address to change, such as stopping or restarting the instance.
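A hedged boto3 sketch of allocating an Elastic IP and associating it with an instance; the instance ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Reserve an Elastic IP address in the account (VPC scope).
allocation = ec2.allocate_address(Domain="vpc")

# Associate it with an existing instance (instance ID is a placeholder).
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    InstanceId="i-0123456789abcdef0",
)
```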
True or False? Route Tables are used to allow or deny traffic from the internet to AWS instances
False.
Explain Security Groups and Network ACLs
- NACL - security layer on the subnet level.
- Security Group - security layer on the instance level.
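As an illustration of the instance-level layer, a hedged boto3 sketch that opens HTTPS on a security group; the group ID is a placeholder (NACL rules, by contrast, are attached to subnets):

```python
import boto3

ec2 = boto3.client("ec2")

# Security group = instance-level firewall: allow inbound HTTPS from anywhere.
# (Group ID is a placeholder.)
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```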
What is AWS Direct Connect?
Allows you to connect your corporate network to AWS network.
AWS - Identify the service or tool
What would you use for automating code/software deployments?
AWS CodeDeploy
What would you use for easily creating similar AWS environments/resources for different customers?
CloudFormation
Using which service, can you add user sign-up, sign-in and access control to mobile and web apps?
Cognito
Which service would you use for building a website or web application?
Lightsail
Which tool would you use for choosing between Reserved instances or On-Demand instances?
Cost Explorer
What would you use to check how many unassociated Elastic IP addresses you have?
Trusted Advisor
Which service allows you to transfer large amounts (Petabytes) of data in and out of the AWS cloud?
AWS Snowball
Which service would you use if you need a data warehouse?
AWS RedShift
Which service provides a virtual network dedicated to your AWS account?
VPC
What would you use to get automated backups for an application that has a MySQL database layer?
Amazon Aurora
What would you use to migrate on-premise database to AWS?
AWS Database Migration Service (DMS)
What would you use to check why certain EC2 instances were terminated?
AWS CloudTrail
What would you use for SQL database?
AWS RDS
What would you use for NoSQL database?
AWS DynamoDB
What would you use for adding image and video analysis to your application?
AWS Rekognition
Which service would you use for debugging and improving performance issues in your applications?
AWS X-Ray
Which service is used for sending notifications?
SNS
What would you use for running SQL queries interactively on S3?
AWS Athena
What would you use for preparing and combining data for analytics or ML?
AWS Glue
Which service would you use for monitoring malicious activity and unauthorized behavior in regards to AWS accounts and workloads?
Amazon GuardDuty
Which service would you use to centrally manage billing, control access, compliance, and security across multiple AWS accounts?
AWS Organizations
Which service would you use for web application protection?
AWS WAF
You would like to monitor some of your resources in the different services. Which service would you use for that?
CloudWatch
Which service would you use for performing security assessment?
AWS Inspector
Which service would you use for creating DNS record?
Route 53
What would you use if you need a fully managed document database?
Amazon DocumentDB
Which service would you use to add access control (or sign-up, sign-in forms) to your web/mobile apps?
AWS Cognito
Which service would you use if you need messaging queue?
Simple Queue Service (SQS)
Which service would you use if you need managed DDOS protection?
AWS Shield
Which service would you use if you need to store frequently used data for low latency access?
ElastiCache
What would you use to transfer files over long distances between a client and an S3 bucket?
Amazon S3 Transfer Acceleration
Which service would you use for distributing incoming requests across multiple resources or endpoints?
Route 53
Which services are involved in returning a custom string (based on the input) when entering a URL in the browser?
Lambda - to define a function that gets an input and returns a certain string
API Gateway - to define the URL trigger (= when you insert the URL, the function is invoked).
Which service would you use for data or events streaming?
Kinesis
AWS DNS
What is Route 53?
"Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service..." Some of Route 53 features:
- Register domain
- DNS service - domain name translations
- Health checks - verify your app is available
More on Route 53 here
AWS Monitoring & Logging
What is AWS CloudWatch?
AWS definition: "Amazon CloudWatch is a monitoring and observability service..."
More on CloudWatch here
What is AWS CloudTrail?
AWS definition: "AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account."
Read more on CloudTrail here
What is Simple Notification Service?
AWS definition: "a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications."
Read more about it here
Explain the following in regards to SNS:
- Topics
- Subscribers
- Publishers
- Topics - used for grouping multiple endpoints
- Subscribers - the endpoints where topics send messages to
- Publishers - the provider of the message (event, person, ...)
AWS Billing & Support
What is AWS Organizations?
AWS definition: "AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS." More on Organizations here
What are Service Control Policies and to which service do they belong?
They belong to AWS Organizations. Amazon's definition: "SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines."
Learn more here
Explain the AWS pricing model
It mainly works on a "pay-as-you-go" basis, meaning you pay only for what you are using and when you are using it. In S3 you pay for 1. how much data you are storing 2. making requests (PUT, POST, ...). In EC2 it's based on the purchasing option (on-demand, spot, ...), instance type, AMI type and the region used.
More on AWS pricing model here
How should one estimate AWS costs when, for example, comparing to on-premise solutions?
- TCO calculator
- AWS simple calculator
- Cost Explorer
What does basic support in AWS include?
- 24x7 customer service
- Trusted Advisor
- AWS Personal Health Dashboard
How are EC2 instances billed?
What is the AWS Pricing Calculator used for?
What is Amazon Connect?
Amazon definition: "Amazon Connect is an easy to use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost."
Learn more here
What are "APN Consulting Partners"?
Amazon definition: "APN Consulting Partners are professional services firms that help customers of all types and sizes design, architect, build, migrate, and manage their workloads and applications on AWS, accelerating their journey to the cloud."
Learn more here
Which of the following lists the AWS support plans, sorted in the correct order?
- Basic, Developer, Business, Enterprise
- Newbie, Intermediate, Pro, Enterprise
- Developer, Basic, Business, Enterprise
- Beginner, Pro, Intermediate, Enterprise
- Basic, Developer, Business, Enterprise
True or False? Region is a factor when it comes to EC2 costs/pricing
True. You pay differently based on the chosen region.
What is "AWS Infrastructure Event Management"?
AWS Definition: "AWS Infrastructure Event Management is a structured program available to Enterprise Support customers (and Business Support customers for an additional fee) that helps you plan for large-scale events such as product or application launches, infrastructure migrations, and marketing events."
AWS Automation
What is AWS CodeDeploy?
Amazon definition: "AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers."
Learn more here
Explain what is CloudFormation
AWS - Misc
Which AWS service you have experience with that you think is not very common?
What is AWS CloudSearch?
What is AWS Lightsail?
AWS definition: "Lightsail is an easy-to-use cloud platform that offers you everything needed to build an application or website, plus a cost-effective, monthly plan."
What is AWS Rekognition?
AWS definition: "Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise to use."
Learn more here
What are AWS Resource Groups used for?
Amazon definition: "You can use resource groups to organize your AWS resources. Resource groups make it easier to manage and automate tasks on large numbers of resources at one time. "
Learn more here
What is AWS Global Accelerator?
Amazon definition: "AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users..."
Learn more here
What is AWS Config?
Amazon definition: "AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources."
Learn more here
What is AWS X-Ray?
AWS definition: "AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture." Learn more here
What is AWS OpsWorks?
Amazon definition: "AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet."
Learn more about it here
What is AWS Athena?
"Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL."
Learn more about AWS Athena here
What is Amazon Cloud Directory?
Amazon definition: "Amazon Cloud Directory is a highly available multi-tenant directory-based store in AWS. These directories scale automatically to hundreds of millions of objects as needed for applications."
Learn more here
What is AWS Elastic Beanstalk?
AWS definition: "AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services...You can simply upload your code and Elastic Beanstalk automatically handles the deployment"
Learn more about it here
What is AWS SWF?
Amazon definition: "Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud."
Learn more on Amazon Simple Workflow Service here
What is AWS EMR?
AWS definition: "big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto."
Learn more here
What is AWS Quick Starts?
AWS definition: "Quick Starts are built by AWS solutions architects and partners to help you deploy popular technologies on AWS, based on AWS best practices for security and high availability."
Read more here
What is the Trusted Advisor?
What is AWS Service Catalog?
Amazon definition: "AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS."
Learn more here
What is AWS CAF?
Amazon definition: "AWS Professional Services created the AWS Cloud Adoption Framework (AWS CAF) to help organizations design and travel an accelerated path to successful cloud adoption. "
Learn more here
What is AWS Cloud9?
AWS definition: "AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser"
What is AWS Application Discovery Service?
Amazon definition: "AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers."
Learn more here
What is the AWS Well-Architected Framework and what pillars is it based on?
AWS definition: "The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. Based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization"
Learn more here
Which AWS services are serverless (or have the option to be serverless)?
- AWS Lambda
- AWS Athena
What is Simple Queue Service (SQS)?
AWS definition: "Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications".
Learn more about it here
Network
What do you need in order to communicate?
- A common language (for the two ends to understand each other)
- A way to address whom you want to communicate with
- A connection (so the content of the communication can reach the recipients)
What is TCP/IP?
A set of protocols that define how two or more devices can communicate with each other. To learn more about TCP/IP, read here
What is Ethernet?
Ethernet simply refers to the most common type of Local Area Network (LAN) used today. A LAN—in contrast to a WAN (Wide Area Network), which spans a larger geographical area—is a connected network of computers in a small area, like your office, college campus, or even home.
What is a MAC address? What is it used for?
A MAC address is a unique identification number or code used to identify individual devices on the network.
Packets sent on Ethernet always come from a MAC address and are sent to a MAC address. When a network adapter receives a packet, it compares the packet’s destination MAC address to the adapter’s own MAC address.
When is this MAC address used?: ff:ff:ff:ff:ff:ff
When a device sends a packet to the broadcast MAC address (FF:FF:FF:FF:FF:FF), it is delivered to all stations on the local network. It needs to be used in order for all devices to receive your packet at the datalink layer.
What is an IP address?
An Internet Protocol address (IP address) is a numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. An IP address serves two main functions: host or network interface identification and location addressing.
Explain subnet mask and give an example
A subnet mask is a 32-bit number that masks an IP address and divides the IP address into a network address and a host address. The subnet mask is made by setting the network bits to all "1"s and the host bits to all "0"s. Within a given network, two host addresses are reserved for special purposes and cannot be assigned to hosts: the all-zeros host address is the network address and the all-ones host address is the broadcast address.
For Example
| Address Class | No of Network Bits | No of Host Bits | Subnet mask | CIDR notation |
| ------------- | ------------------ | --------------- | --------------- | ------------- |
| A | 8 | 24 | 255.0.0.0 | /8 |
| A | 9 | 23 | 255.128.0.0 | /9 |
| A | 12 | 20 | 255.240.0.0 | /12 |
| A | 14 | 18 | 255.252.0.0 | /14 |
| B | 16 | 16 | 255.255.0.0 | /16 |
| B | 17 | 15 | 255.255.128.0 | /17 |
| B | 20 | 12 | 255.255.240.0 | /20 |
| B | 22 | 10 | 255.255.252.0 | /22 |
| C | 24 | 8 | 255.255.255.0 | /24 |
| C | 25 | 7 | 255.255.255.128 | /25 |
| C | 28 | 4 | 255.255.255.240 | /28 |
| C | 30 | 2 | 255.255.255.252 | /30 |
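A quick worked example: for the host 192.168.1.130 with a /26 mask (255.255.255.192), the block size is 256 - 192 = 64, so the subnets start at .0, .64, .128 and .192. The host therefore belongs to the 192.168.1.128/26 network, whose broadcast address is 192.168.1.191 and whose usable host range is 192.168.1.129 - 192.168.1.190.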
What is a private IP address? In which scenarios/system designs, one should use it?
What is a public IP address? In which scenarios/system designs, one should use it?
Explain the OSI model. What layers there are? What each layer is responsible for?
- Application: user end (HTTP is here)
- Presentation: establishes context between application-layer entities (Encryption is here)
- Session: establishes, manages and terminates the connections
- Transport: transfers variable-length data sequences from a source to a destination host (TCP & UDP are here)
- Network: transfers datagrams from one network to another (IP is here)
- Data link: provides a link between two directly connected nodes (MAC is here)
- Physical: the electrical and physical spec the data connection (Bits are here)
You can read more about the OSI model in penguintutor.com
For each of the following determine to which OSI layer it belongs:
- Error correction
- Packets routing
- Cables and electrical signals
- MAC address
- IP address
- Terminate connections
- 3 way handshake
What delivery schemes are you familiar with?
Unicast: One-to-one communication where there is one sender and one receiver.
Broadcast: Sending a message to everyone in the network. The address ff:ff:ff:ff:ff:ff is used for broadcasting. Two common protocols which use broadcast are ARP and DHCP.
Multicast: Sending a message to a group of subscribers. It can be one-to-many or many-to-many.
What is CSMA/CD? Is it used in modern ethernet networks?
CSMA/CD stands for Carrier Sense Multiple Access / Collision Detection. Its primary focus is to manage access to a shared medium/bus where only one host can transmit at a given point of time. It is largely irrelevant in modern Ethernet networks, which are switched and full-duplex, so each link is its own collision domain.
CSMA/CD algorithm:
- Before sending a frame, a host checks whether another host is already transmitting a frame.
- If no one is transmitting, it starts transmitting the frame.
- If two hosts transmitted at the same time, there is a collision.
- Both hosts stop sending the frame and send everyone a 'jam signal', notifying them that a collision occurred.
- Each host waits for a random time before sending again.
- Once the random wait is over, the hosts try to send the frame again and the process repeats.
Describe the following network devices and the difference between them:
- router
- switch
- hub
How does a router work?
A router is a physical or virtual appliance that passes information between two or more packet-switched computer networks. A router inspects a given data packet's destination Internet Protocol address (IP address), calculates the best way for it to reach its destination and then forwards it accordingly.
What is NAT?
Network Address Translation (NAT) is a process in which one or more local IP address is translated into one or more Global IP address and vice versa in order to provide Internet access to the local hosts.
What is a proxy? How does it work? What do we need it for?
A proxy server acts as a gateway between you and the internet. It’s an intermediary server separating end users from the websites they browse.
If you’re using a proxy server, internet traffic flows through the proxy server on its way to the address you requested. The request then comes back through that same proxy server (there are exceptions to this rule), and then the proxy server forwards the data received from the website to you.
Proxy servers provide varying levels of functionality, security, and privacy depending on your use case, needs, or company policy.
What is TCP? How does it work? What is the 3 way handshake?
TCP 3-way handshake or three-way handshake is a process which is used in a TCP/IP network to make a connection between server and client.
A three-way handshake is primarily used to create a TCP socket connection. It works when:
- A client node sends a SYN data packet over an IP network to a server on the same or an external network. The objective of this packet is to ask/infer if the server is open for new connections.
- The target server must have open ports that can accept and initiate new connections. When the server receives the SYN packet from the client node, it responds and returns a confirmation receipt – the ACK packet or SYN/ACK packet.
- The client node receives the SYN/ACK from the server and responds with an ACK packet.
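One way to observe the handshake yourself (assuming tcpdump is installed and you have root privileges; example.com is just a placeholder target):

```
# capture TCP traffic on port 80 in one terminal
sudo tcpdump -nn -i any 'tcp port 80'

# trigger a connection from another terminal
curl -s http://example.com > /dev/null
```

The first three captured packets should show the flags [S], [S.] and [.], i.e. SYN, SYN-ACK and ACK.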
What is round-trip delay or round-trip time?
From wikipedia: "the length of time it takes for a signal to be sent plus the length of time it takes for an acknowledgement of that signal to be received"
Bonus question: what is the RTT of LAN?
How does SSL handshake work?
What is the difference between TCP and UDP?
TCP establishes a connection between the client and the server to guarantee the order of the packets. UDP, on the other hand, does not establish a connection between client and server and doesn't handle packet order. This makes UDP more lightweight than TCP and a perfect candidate for services like streaming.
Penguintutor.com provides a good explanation.
What TCP/IP protocols are you familiar with?
Explain "default gateway"
A default gateway serves as an access point or IP router that a networked computer uses to send information to a computer in another network or the internet.
What is ARP? How does it work?
ARP stands for Address Resolution Protocol. When you try to ping an IP address on your local network, say 192.168.1.1, your system has to turn the IP address 192.168.1.1 into a MAC address. This involves using ARP to resolve the address, hence its name.
Systems keep an ARP look-up table where they store information about what IP addresses are associated with what MAC addresses. When trying to send a packet to an IP address, the system will first consult this table to see if it already knows the MAC address. If there is a value cached, ARP is not used.
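To inspect this cache on a Linux host, something like the following should work (arp requires the net-tools package on some distributions):

```
# show the ARP/neighbor cache
ip neigh show

# legacy equivalent
arp -n
```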
What is TTL? What does it help to prevent?
What is DHCP? How does it work?
It stands for Dynamic Host Configuration Protocol, and allocates IP addresses, subnet masks and gateways to hosts. This is how it works:
- A host upon entering a network, broadcasts a message in search of a DHCP server (DHCP DISCOVER)
- An offer message is sent back by the DHCP server as a packet containing lease time, subnet mask, IP addresses, etc (DHCP OFFER)
- Depending on which offer is accepted, the client sends back a reply broadcast letting all DHCP servers know (DHCP REQUEST)
- Server sends an acknowledgment (DHCP ACK)
Read more here
Can you have two DHCP servers in the same network? How does it work?
What is SSL tunneling? How does it work?
What is a socket? Where can you see the list of sockets in your system?
What is IPv6? Why should we consider using it if we have IPv4?
What is VLAN?
What is MTU?
What happens if you send a packet that is bigger than the MTU?
True or False? Ping is using UDP because it doesn't care about reliable connection
What is SDN?
What is ICMP? What is it used for?
What is NAT? How does it work?
NAT stands for Network Address Translation. It’s a way to map multiple local private addresses to a public one before transferring the information. Organizations that want multiple devices to employ a single IP address use NAT, as do most home routers. For example, your computer's private IP could be 192.168.1.100, but your router maps the traffic to its public IP (e.g. 1.1.1.1). Any device on the internet would see the traffic coming from your public IP (1.1.1.1) instead of your private IP (192.168.1.100).
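As a minimal sketch of how source NAT is often done on a Linux gateway (assuming iptables and an external interface named eth0; recent systems may use nftables or firewalld instead):

```
# allow the box to forward packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1

# rewrite the source address of traffic leaving via eth0 to the box's public IP
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```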
Which factors affect network performance?
Network - Data and Control planes
What "control plane" refers to?
The control plane is the part of the network that decides how to route and forward packets to a different location.
What "data plane" refers to?
The data plane is the part of the network that actually forwards the data/packets.
What "management plane" refers to?
Refers to monitoring and management functions.
To which plane (data, control, ...) does creating routing tables belong?
Control Plane.
Explain Spanning Tree Protocol (STP)
What is link aggregation? Why is it used?
What is Asymmetric Routing? How to deal with it?
What overlay (tunnel) protocols are you familiar with?
What is GRE? How does it work?
What is VXLAN? How does it work?
What is SNAT?
Explain OSPF
What is latency?
What is bandwidth?
What is throughput?
When performing a search query, what is more important, latency or throughput? And how do you assure that when managing global infrastructure?
Latency. To have a good latency, a search query should be forwarded to the closest datacenter.
When uploading a video, what is more important, latency or throughput? And how to assure that?
Throughput. To have a good throughput, the upload stream should be routed to an underutilized link.
What other considerations (except latency and throughput) are there when forwarding requests?
- Keep caches updated (which means the request could be forwarded not to the closest datacenter)
Explain Spine & Leaf
What is Network Congestion? What can cause it?
What can you tell me about UDP packet format? What about TCP packet format? How is it different?
What is the exponential backoff algorithm? Where is it used?
Using Hamming code, what would be the code word for the following data word 100111010001101?
00110011110100011101
Give examples of protocols found in the application layer
- Hypertext Transfer Protocol (HTTP) - used for the webpages on the internet
- Simple Mail Transfer Protocol (SMTP) - email transmission
- Telnet (Telecommunications Network) - terminal emulation to allow a client access to a telnet server
- File Transfer Protocol (FTP) - facilitates transfer of files between any two machines
- Domain Name System (DNS) - domain name translation
- Dynamic Host Configuration Protocol (DHCP) - allocates IP addresses, subnet masks and gateways to hosts
- Simple Network Management Protocol (SNMP) - gathers data of devices on the network
Give examples of protocols found in the network Layer
- Internet Protocol (IP) - assists in routing packets from one machine to another
- Internet Control Message Protocol (ICMP) - lets one know what is going on, such as error messages and debugging information
What is HSTS?
HTTP Strict Transport Security is a web server directive that informs user agents and web browsers how to handle its connection through a response header sent at the very beginning and back to the browser. This forces connections over HTTPS encryption, disregarding any script's call to load any resource in that domain over HTTP.
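An example of what such a response header typically looks like (the max-age value is just a common choice of one year):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```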
What is the difference if any between SSL and TLS?
Network - Misc
What is the Internet? Is it the same as the World Wide Web?
The internet refers to a network of networks, transferring huge amounts of data around the globe.
The World Wide Web is an application running on millions of servers, on top of the internet, accessed through what is known as the web browser
What is an ISP?
An ISP (Internet Service Provider) is the company that provides you with internet access.
Linux
Linux Master Application
A completely free application for testing your knowledge on Linux
Linux Self Assessment
What is your experience with Linux?
Only you know :)
For example:
- Administration
- Troubleshooting & Debugging
- Storage
- Networking
- Development
- Deployments
Explain what each of the following commands does and give an example on how to use it:
- touch
- ls
- rm
- cat
- cp
- mkdir
- pwd
- cd
- touch - update file's timestamp. More commonly used for creating files
- ls - listing files and directories
- rm - remove files and directories
- cat - create, view and concatenate files
- cp - copy files and directories
- mkdir - create directories
- pwd - print current working directory (= at what path the user currently located)
- cd - change directory
What each of the following commands does?
- cd /
- cd ~
- cd
- cd ..
- cd .
- cd -
- cd / -> change to the root directory
- cd ~ -> change to your home directory
- cd -> change to your home directory
- cd .. -> change to the directory above your current i.e parent directory
- cd . -> change to the directory you currently in
- cd - -> change to the last visited path
Some of the commands in the previous question can be run with the -r/-R flag. What does it do? Give an example to when you would use it
The -r (or -R in some commands) flag allows the user to run a certain command recursively. For example, given a tree like /dir1 containing dir2 (with file1 and file2) and dir3 (with file3), one can list all the files recursively by running ls -R /dir1
Explain each field in the output of `ls -l` command
It shows a detailed list of files in a long format. From the left:
- file permissions, number of links, owner name, owner group, file size, timestamp of last modification and directory/file name
What are hidden files/directories? How to list them?
These are files that are not displayed in a standard ls listing; their names start with a dot. An example is .bashrc, which is used to execute scripts when a shell starts. Some also store configuration for services or tools on your host, like the kubectl configuration referenced by KUBECONFIG. The command used to list them is ls -a
What do > and < do in terms of input and output for programs?
They redirect input and output for a program: < feeds the content of a given file to the program's standard input (stdin), and > writes the program's standard output (stdout) to a given file.
myProgram < input.txt > executionOutput.txt
Explain what each of the following commands does and give an example on how to use it:
- sed
- grep
- cut
- awk
- sed: a stream editor. Can be used for various purposes like replacing a word in a file: sed -i 's/salad/burger/g' some_file
- grep: searches for lines matching a pattern, e.g. grep -i error /var/log/messages
- cut: extracts fields/columns from each line, e.g. cut -d':' -f1 /etc/passwd
- awk: a text-processing language, often used for printing specific columns, e.g. awk '{print $1}' some_file
How to rename the name of a file or a directory?
Using the mv
command.
Specify which command would you use (and how) for each of the following scenarios
- Remove a directory with files
- Display the content of a file
- Provides access to the file /tmp/x for everyone
- Change working directory to user home directory
- Replace every occurrence of the word "good" with "great" in the file /tmp/y
rm -rf dir
cat or less
chmod 777 /tmp/x
cd ~
sed -i s/good/great/g /tmp/y
How can you check what is the path of a certain command?
- whereis
- which
What is the difference between these two commands? Will it result in the same output?
echo hello world
echo "hello world"
echo hello world
echo "hello world"
The echo command receives two separate arguments in the first execution and in the second execution it gets one argument which is the string "hello world". The output will be the same.
Explain piping. How do you perform piping?
Using a pipe in Linux, allows you to send the output of one command to the input of another command. For example: cat /etc/services | wc -l
Fix the following commands:
- sed "s/1/2/g' /tmp/myFile
- find . -iname *.yaml -exec sed -i "s/1/2/g" {} ;
sed 's/1/2/g' /tmp/myFile # sed "s/1/2/g" is also fine
find . -iname "*.yaml" -exec sed -i "s/1/2/g" {} \;
How to check which commands you executed in the past?
history command or .bash_history file
Running the command df
you get "command not found". What could be wrong and how to fix it?
Most likely the default/generated $PATH was somehow modified or overridden and no longer contains /bin/
where df would normally be found.
This issue could also happen if bash_profile or any other configuration file of your interpreter was wrongly modified, causing erratic behavior.
You would solve this by fixing your $PATH variable. There are several options:
- Manually add what you need to your $PATH, e.g. PATH="$PATH:/usr/bin:/bin"
- Restore the variable from a backup of your environment configuration, if you have one.
- Look up your distro's default $PATH value and set it using the first method.
Note: There are many ways of getting errors like this: a wrongly modified bash_profile or interpreter configuration file, permission issues, badly compiled software (if you compiled it yourself)... there is no answer that will be true 100% of the time.
How do you schedule tasks periodically?
You can use the commands cron
and at
.
With cron, tasks are scheduled using the following format:
*/30 * * * * bash myscript.sh
Executes the script every 30 minutes.
The tasks are stored in a cron file, you can write in it using crontab -e
Alternatively if you are using a distro with systemd it's recommended to use systemd timers.
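A minimal sketch of such a systemd timer, assuming a hypothetical script at /usr/local/bin/myscript.sh (unit names and paths are illustrative):

```
# /etc/systemd/system/myscript.service
[Unit]
Description=Run myscript

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myscript.sh

# /etc/systemd/system/myscript.timer
[Unit]
Description=Run myscript every 30 minutes

[Timer]
OnCalendar=*:0/30
Persistent=true

[Install]
WantedBy=timers.target
```

The timer is then activated with systemctl enable --now myscript.timer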
Linux - I/O Redirection
Explain Linux I/O redirection
Demonstrate Linux output redirection
ls > ls_output.txt
Demonstrate Linux stderr output redirection
yippiekaiyay 2> ls_output.txt
Demonstrate Linux stdout to stderr redirection
yippiekaiyay 1>&2
What is the result of running the following command? yippiekaiyay 1>&2 die_hard
An output similar to: yippiekaiyay: command not found...
The file die_hard
will not be created
Linux FHS
In Linux FHS (Filesystem Hierarchy Standard) what is the /
?
The root of the filesystem. The beginning of the tree.
What is stored in each of the following paths?
- /bin, /sbin, /usr/bin and /usr/sbin
- /etc
- /home
- /var
- /tmp
- binaries
- configuration files
- home directories of the different users
- files that tend to change and be modified like logs
- temporary files
What is special about the /tmp directory when compared to other directories?
/tmp
folder is cleaned automatically, usually upon reboot.
What kind of information one can find in /proc?
Can you create files in /proc?
In which path can you find the system devices (e.g. block storage)?
Linux - Permissions
How to change the permissions of a file?
Using the chmod
command.
What does the following permissions mean?:
- 777
- 644
- 750
- 777 - owner, group and others all have Read (4), Write (2) and Execute (1): 4+2+1 = 7
- 644 - owner has Read (4) and Write (2): 4+2 = 6; group and others have Read (4)
- 750 - owner has read, write and execute (7); group has Read (4) and Execute (1): 4+1 = 5; others have no permissions
What this command does? chmod +x some_file
It adds execute permissions to all sets, i.e. user, group and others
Explain what is setgid and setuid
- setuid is a Linux file permission bit that allows a user to run an executable with the permissions of the file's owner, effectively elevating privileges for that program only.
- setgid is a similar bit: when an executable with setgid set is run, the process runs with the group that owns the file.
What is the purpose of sticky bit?
It's a permission bit which, when set on a directory, allows only a file's owner (or root) to delete or rename files within that directory.
What the following commands do?
- chmod
- chown
- chgrp
- chmod - changes access permissions to files system objects
- chown - changes the owner of file system files and directories
- chgrp - changes the group associated with a file system object
What is sudo? How do you set it up?
True or False? In order to install packages on the system one must be the root user or use the sudo command
True
Explain what are ACLs. For what use cases would you recommend to use them?
You try to create a file but it fails. Name at least three different reasons why this could happen
- No more disk space
- No more inodes
- No permissions
A user accidentally executed the following chmod -x $(which chmod)
. How to fix it?
Linux - systemd
What is systemd?
systemd is an init system and service manager for Linux (the 'd' stands for daemon).
A daemon is a program that runs in the background without direct control of the user, although the user can at any time talk to the daemon.
systemd has many features such as user process control/tracking, snapshot support, inhibitor locks and more.
If we visualize the Unix/Linux system in layers, systemd falls directly after the Linux kernel:
Hardware -> Kernel -> Daemons, System Libraries, Server Display.
How to start or stop a service?
To start a service: systemctl start <service name>
To stop a service: systemctl stop <service name>
How to check the status of a service?
systemctl status <service name>
On a system which uses systemd, how would you display the logs?
journalctl
Describe how to make a certain process/app a service
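One common approach on systemd-based distributions is writing a unit file; a minimal hedged sketch (the binary path and unit name are placeholders):

```
# /etc/systemd/system/myapp.service
[Unit]
Description=My application
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating the file, run systemctl daemon-reload and then systemctl enable --now myapp.service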
Linux - Troubleshooting & Debugging
Where system logs are located?
/var/log
How to follow a file's content as it is being appended, without opening the file every time?
tail -f <file_name>
What are you using for troubleshooting and debugging network issues?
dstat -t
is great for identifying network and disk issues.
netstat -tnlaup
can be used to see which processes are running on which ports.
lsof -i -P
can be used for the same purpose as netstat.
ngrep -d any metafilter
for matching regex against payloads of packets.
tcpdump
for capturing packets
wireshark
same concept as tcpdump but with GUI (optional).
What are you using for troubleshooting and debugging disk & file system issues?
dstat -t
is great for identifying network and disk issues.
opensnoop
can be used to see which files are being opened on the system (in real time).
What are you using for troubleshooting and debugging process issues?
strace
is great for understanding what your program does. It prints every system call your program executed.
What are you using for debugging CPU related issues?
top
will show you how much CPU percentage each process consumes
perf
is a great choice for sampling profiler and in general, figuring out what your CPU cycles are "wasted" on
flamegraphs
is great for CPU consumption visualization (http://www.brendangregg.com/flamegraphs.html)
You get a call from someone claiming "my system is SLOW". What do you do?
- Check with top for anything unusual
- Run dstat -t to check if it's related to disk or network
- Check if it's network related with sar
- Check I/O stats with iostat
Explain iostat output
How to debug binaries?
What is the difference between CPU load and utilization?
How you measure time execution of a program?
Linux Kernel
What is a kernel, and what does it do?
The kernel is the part of the operating system that is responsible for tasks like:
- Allocating memory
- Scheduling processes
- Controlling the CPU
How do you find out which Kernel version your system is using?
uname -a
command
What is a Linux kernel module and how do you load a new module?
Explain user space vs. kernel space
The operating system executes the kernel in protected memory to prevent anyone from changing it (and risking a crash). This is what is known as "kernel space". "User space" is where users execute their commands or applications. It's important to create this separation since we can't rely on user applications not to tamper with the kernel, causing it to crash.
Applications can access system resources and indirectly the kernel space by making what is called "system calls".
Linux - SSH
What is SSH? How to check if a Linux server is running SSH?
Wikipedia Definition: "SSH or Secure Shell is a cryptographic network protocol for operating network services securely over an unsecured network."
Hostinger.com Definition: "SSH, or Secure Shell, is a remote administration protocol that allows users to control and modify their remote servers over the Internet."
An SSH server will have the SSH daemon running. Depending on the distribution, you should be able to check whether the service is running (e.g. systemctl status sshd).
Why SSH is considered better than telnet?
Telnet also allows you to connect to a remote host, but as opposed to SSH, where the communication is encrypted, in telnet the data is sent in clear text. It therefore isn't considered secure, because anyone on the network can see exactly what is sent, including passwords.
What is stored in ~/.ssh/known_hosts
?
You try to ssh to a server and you get "Host key verification failed". What does it mean?
It means that the key of the remote host was changed and doesn't match the one that is stored on the machine (in ~/.ssh/known_hosts).
What is the difference between SSH and SSL?
What is ssh-keygen used for?
What is SSH port forwarding?
Linux - Globbing, Wildcards
What is Globbing?
What are wildcards? Can you give an example of how to use them?
Explain what will ls [XYZ]
match
Explain what will ls [^XYZ]
match
Explain what will ls [0-5]
match
What each of the following matches
- ?
- *
- The ? matches any single character
- The * matches zero or more characters
What do we grep for in each of the following commands?:
grep '[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}' some_file
grep -E "error|failure" some_file
grep '[0-9]$' some_file
grep '[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}' some_file
grep -E "error|failure" some_file
grep '[0-9]$' some_file
- An IP address
- The word "error" or "failure"
- Lines which end with a number
Which line numbers will be printed when running `grep '\baaa\b'` on the following content:
aaa
bbb
ccc.aaa
aaaaaa
lines 1 and 3.
What is the difference single and double quotes?
What is escaping? What escape character is used for escaping?
What is an exit code? What exit codes are you familiar with?
An exit code (or return code) represents the code returned by a child process to its parent process.
0 is an exit code which represents success, while any non-zero value represents an error. Each number has a different meaning, based on how the application was developed.
I consider this a good blog post to read more about it: https://shapeshed.com/unix-exit-codes
Linux Boot Process
Tell me everything you know about the Linux boot process
Another way to ask this: what happens from the moment you turned on the server until you get a prompt
What is GRUB2?
What is Secure Boot?
What can you find in /boot?
Linux Disk & Filesystem
What's an inode?
For each file (and directory) in Linux there is an inode, a data structure which stores meta data related to the file like its size, owner, permissions, etc.
Which of the following is not included in inode:
- Link count
- File size
- File name
- File timestamp
File name (it's part of the directory file)
How to check which disks are currently mounted?
Run mount
You run the mount
command but you get no output. How would you check what mounts you have on your system?
cat /proc/mounts
What is the difference between a soft link and hard link?
Hard link is the same file, using the same inode. Soft link is a shortcut to another file, using a different inode.
True or False? You can create a hard link for a directory
False
True or False? You can create a soft link between different filesystems
True
True or False? Directories always have at minimum 2 links
True.
What happens when you delete the original file in case of soft link and hard link?
Can you check what type of filesystem is used in /home?
There are many answers for this question. One way is running df -T
What is a swap partition? What is it used for?
How to create a
- new empty file
- a file with text (without using text editor)
- a file with given size
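A few hedged one-liners that cover these cases (file names are placeholders; dd and fallocate, shown later in this document, are alternatives for the last one):

```
touch empty_file                      # new empty file
echo "some text" > file_with_text     # file with text, without a text editor
truncate -s 10M file_of_given_size    # file of a given size (sparse)
```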
You are trying to create a new file but you get "File system is full". You check with df for free space and you see you used only 20% of the space. What could be the problem?
How would you check what is the size of a certain directory?
du -sh
What is LVM?
Explain the following in regards to LVM:
- PV
- VG
- LV
What is NFS? What is it used for?
What RAID is used for? Can you explain the differences between RAID 0, 1, 5 and 10?
Describe the process of extending a filesystem disk space
What is lazy umount?
What is tmpfs?
What is stored in each of the following logs?
- /var/log/messages
- /var/log/boot.log
True or False? both /tmp and /var/tmp cleared upon system boot
False. /tmp is cleared upon system boot while /var/tmp is cleared every couple of days or not cleared at all (depends on the distro).
Linux Performance Analysis
How to check what is the current load average?
One can use uptime
or top
You know how to see the load average, great. But what does each part of it mean? For example: 1.43, 2.34, 2.78
This article summarizes the load average topic in a great way
How to check process usage?
pidstat
How to check disk I/O?
iostat -xz 1
How to check how much free memory a system has? How to check memory consumption by each process?
You can use the commands top
and free
How to check TCP stats?
sar -n TCP,ETCP 1
Linux Processes
How to list all the processes running in your system?
ps -ef
How to run a process in the background and why to do that in the first place?
You can achieve that by specifying & at the end of the command. As to why, since some commands/processes can take a lot of time to finish execution or run forever, you may want to run them in the background instead of waiting for them to finish before gaining control again in current session.
How can you find how much memory a specific process consumes?
mem()
{
ps -eo rss,pid,euser,args:100 --sort %mem | grep -v grep | grep -i $@ | awk '{printf $1/1024 "MB"; $1=""; print }'
}
[Source](https://stackoverflow.com/questions/3853655/in-linux-how-to-tell-how-much-memory-processes-are-using)
What signal is used by default when you run 'kill *process id*'?
The default signal is SIGTERM (15). This signal terminates the process gracefully, which means it allows the process to save its current state/configuration before exiting.
What signals are you familiar with?
- SIGTERM - the default signal for terminating a process
- SIGHUP - commonly used for reloading configuration
- SIGKILL - a signal which cannot be caught or ignored
To view all available signals run kill -l
What kill 0
does?
What kill -0
does?
What is a trap?
Every couple of days, a certain process stops running. How can you look into why it's happening?
What happens when you press ctrl + c?
What is a Daemon in Linux?
A background process. Most of these processes are waiting for requests or set of conditions to be met before actually running anything. Some examples: sshd, crond, rpcbind.
What are the possible states of a process in Linux?
- Running (R)
- Uninterruptible Sleep (D) - the process is waiting for I/O
- Interruptible Sleep (S)
- Stopped (T)
- Dead (X)
- Zombie (Z)
How do you kill a process in D state?
What is a zombie process?
A process which has finished running (terminated) but still has an entry in the process table.
One reason it happens is when a parent process is programmed incorrectly. Every parent process should execute wait() to get the exit code from the child process which finished running. When the parent doesn't collect the child's exit code, the child's entry still exists in the process table even though the child finished running.
How to get rid of zombie processes?
You can't kill a zombie process the regular way with kill -9
for example as it's already dead.
One way to get rid of a zombie process is by sending SIGCHLD to the parent process, telling it to reap its terminated children. This might not work if the parent process wasn't programmed properly. The invocation is kill -s SIGCHLD [parent_pid]
You can also try closing/terminating the parent process. This will make the zombie process a child of init (1), which does periodic cleanups and will at some point clean up the zombie process.
How to find all the
- Processes executed/owned by a certain user
- Process which are Java processes
- Zombie Processes
If you mention the ps command with arguments at any point, be familiar with what these arguments do exactly.
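For reference, hedged examples of such ps invocations (the user name is a placeholder; column 8 of ps aux is the STAT field):

```
ps -u someuser                  # processes owned by a specific user
ps -ef | grep -i java           # Java processes (simple pattern match)
ps aux | awk '$8 ~ /^Z/'        # zombie processes (state starts with Z)
```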
What is the init process?
It is the first process executed by the kernel during the booting of a system. It is a daemon process which runs until the system is shut down. That is why it is the parent of all other processes.
Can you describe how processes are being created?
How to change the priority of a process? Why would you want to do that?
Can you explain how a network process/connection is established and how it's terminated?
What strace
does? What about ltrace
?
Find all the files which end with '.yml' and replace the number 1 in 2 in each file
find /some_dir -iname '*.yml' -print0 | xargs -0 -r sed -i 's/1/2/g'
You run ls and you get "/lib/ld-linux-armhf.so.3 no such file or directory". What is the problem?
The ls executable is built for an incompatible architecture.
How would you split a 50 lines file into 2 files of 25 lines each?
You can use the split
command this way: split -l 25 some_file
What is a file descriptor? What file descriptors are you familiar with?
A file descriptor, also known as a file handle, is a unique number which identifies an open file in the operating system.
In Linux (and Unix) the first three file descriptors are:
- 0 - the default data stream for input
- 1 - the default data stream for output
- 2 - the default data stream for output related to errors
This is a great article on the topic: https://www.computerhope.com/jargon/f/file-descriptor.htm
What is NTP? What is it used for?
Explain Kernel OOM
Linux Security
What is chroot? In what scenarios would you consider using it?
What is SELinux?
What is Kerberos?
What is nftables?
What is the firewalld daemon responsible for?
Do you have experience with hardening servers? Can you describe the process?
Linux - Networking
How to list all the interfaces?
ip link show
What is the loopback (lo) interface?
The loopback interface is a special, virtual network interface that your computer uses to communicate with itself. It is used mainly for diagnostics and troubleshooting, and to connect to servers running on the local machine.
What the following commands are used for?
- ip addr
- ip route
- ip link
- ping
- netstat
- traceroute
What is a network namespace? What is it used for?
How to check if a certain port is being used?
One of the following would work:
netstat -tnlp | grep <port_number>
lsof -i -n -P | grep <port_number>
How can you turn your Linux server into a router?
What is a virtual IP? In what situation would you use it?
True or False? The MAC address of an interface is assigned/set by the OS
False
Can you have more than one default gateway in a given system?
Technically, yes.
Which port is used in each of the following protocols?:
- SSH
- SMTP
- HTTP
- DNS
- HTTPS
- SSH - 22
- SMTP - 25
- HTTP - 80
- DNS - 53
- HTTPS - 443
What is telnet and why is it a bad idea to use it in production? (or at all)
Telnet is a type of client-server protocol that can be used to open a command line on a remote computer, typically a server. By default, all the data sent and received via telnet is transmitted in clear/plain text, therefore it should not be used as it does not encrypt any data between the client and the server.
What is the routing table? How do you view it?
How can you send an HTTP request from your shell?
Using nc is one way
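For example, a raw HTTP request with nc might look like this (example.com is a placeholder; curl or wget are the more common tools for this):

```
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80
```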
What are packet sniffers? Have you used one in the past? If yes, which packet sniffers have you used and for what purpose?
A packet sniffer is a network utility that captures and analyzes the packets traveling over the targeted network; some sniffers can also inject traffic into the data stream.
How to list active connections?
How to trigger neighbor discovery in IPv6?
One way would be ping6 ff02::1
What is network interface bonding and do you know how it's performed in Linux?
What network bonding modes are there?
There are a couple of modes:
- balance-rr: round robin bonding
- active-backup: a fault tolerance mode where only one is active
- balance-tlb: Adaptive transmit load balancing
- balance-alb: Adaptive load balancing
What is a bridge? How is it added in the Linux OS?
Linux DNS
How to check what is the hostname of the system?
cat /etc/hostname
You can also run hostnamectl
or hostname
but that might print only a temporary hostname. The one in the file is the permanent one.
What the file /etc/resolv.conf
is used for? What does it include?
What commands are you using for performing DNS queries (or troubleshoot DNS related issues)?
You can specify one or more of the following:
dig
host
nslookup
Linux - Packaging
Do you have experience with packaging? (as in building packages) Can you explain how it works?
How is package installation/removal performed on the distribution you are using?
The answer depends on the distribution being used.
In Fedora/CentOS/RHEL/Rocky it can be done with rpm
or dnf
commands.
In Ubuntu it can be done with the apt
command.
RPM: explain the spec format (what it should and can include)
How do you list the content of a package without actually installing it?
How to know to which package a file on the system belongs? Is it a problem if it doesn't belong to any package?
Where are repositories stored? (based on the distribution you are using)
What is an archive? How do you create one in Linux?
How to extract the content of an archive?
Why do we need package managers? Why not simply creating archives and publish them?
Package managers allow you to manage a package's lifecycle: installing, removing and updating packages.
In addition, you can specify in a spec how a certain package will be installed - where to copy the files, which commands to run prior to the installation, post the installation, etc.
Linux DNF
How to look for a package that provides the command /usr/bin/git? (the package isn't necessarily installed)
dnf provides /usr/bin/git
Linux Applications and Services
What can you find in /etc/services?
How to make sure a Service starts automatically after a reboot or crash?
Depends on the init system.
Systemd: systemctl enable [service_name]
System V: update-rc.d [service_name]
and add this line id:5678:respawn:/bin/sh /path/to/app
to /etc/inittab
Upstart: add Upstart init script at /etc/init/service.conf
You run ssh 127.0.0.1
but it fails with "connection refused". What could be the problem?
- SSH server is not installed
- SSH server is not running
How to print the shared libraries required by a certain program? What is it useful for?
What is CUPS?
What types of web servers are you familiar with?
Nginx, Apache httpd.
Linux Users and Groups
What is a "superuser" (or root user)? How is it different from regular users?
How do you create users? Where user information is stored?
Which file stores information about groups?
How do you change/set the password of a user?
Which file stores users passwords? Is it visible for everyone?
Do you know how to create a new user without using adduser/useradd command?
What information is stored in /etc/passwd? explain each field
How to add a new user to the system without providing him the ability to log-in into the system?
adduser user_name --shell=/bin/false --no-create-home
You can also add a user and then edit /etc/passwd.
How to switch to another user? How to switch to the root user?
su command. Use su - to switch to root
What is the UID of the root user? What about a regular user?
What can you do if you lost/forgot the root password?
Re-install the OS IS NOT the right answer :)
What is /etc/skel?
How to see a list of who logged-in to the system?
Using the last
command.
Explain what each of the following commands does:
- useradd
- usermod
- whoami
- id
Linux Hardware
Where can you find information on the processor?
/proc/cpuinfo
How can you print information on the BIOS, motherboard, processor and RAM?
dmidecode
How can you print all the information on connected block devices in your system?
lsblk
True or False? In user space, applications don't have full access to hardware resources
True. Only in kernel space do they have full access to hardware resources.
Linux - Security
How do you create a private key for a CA (certificate authority)?
One way is using openssl this way:
openssl genrsa -aes256 -out ca-private-key.pem 4096
How do you create a public key for a CA (certificate authority)?
openssl req -new -x509 -days 730 -key [private key file name] -sha256 -out ca.pem
If using the private key from the previous question then the command would be:
openssl req -new -x509 -days 730 -key ca-private-key.pem -sha256 -out ca.pem
Linux - Namespaces
What types of namespaces are there in Linux?
- Process ID namespaces: each of these namespaces has an independent set of process IDs
- Mount namespaces: isolation and control of mount points
- Network namespaces: isolate system networking resources such as the routing table, interfaces, ARP table, etc.
- UTS namespaces: isolate the hostname and domain name
- IPC namespaces: isolate inter-process communication resources
- User namespaces: isolate user and group IDs
- Time namespaces: allow processes to see a different system time
True or False? In every PID (Process ID) namespace the first process is assigned process ID number 1
True. Inside the namespace its PID is 1, while in the parent namespace it has a different PID.
True or False? In a child PID namespace all processes are aware of parent PID namespace and processes and the parent PID namespace has no visibility of child PID namespace processes
False. The opposite is true. Parent PID namespace is aware and has visibility of processes in child PID namespace and child PID namespace has no visibility as to what is going on in the parent PID namespace.
True or False? By default, when creating two separate network namespaces, a ping from one namespace to another will work fine
False. Network namespace has its own interfaces and routing table. There is no way (without creating a bridge for example) for one network namespace to reach another.
True or False? With UTS namespaces, processes may appear as if they run on different hosts and domains while running on the same host
True
True or False? It's not possible to have a root user with ID 0 in child user namespaces
False. In every child user namespace, it's possible to have a separate root user with uid of 0.
What are time namespaces used for?
In a time namespace, processes can see a different system time than the rest of the system.
Linux - Virtualization
What virtualization solutions are available for Linux?
What is KVM?
Linux - AWK
What does the awk
command do? Have you used it? What for?
From Wikipedia: "AWK is domain-specific language designed for text processing and typically used as a data extraction and reporting tool"
How to print the 4th column in a file?
awk '{print $4}' file
How to print every line that is longer than 79 characters?
awk 'length($0) > 79' file
What does the lsof
command do? Have you used it? What for?
What is the difference between find and locate?
How does a user process perform a privileged operation, such as reading from the disk?
Using system calls
Linux - System Calls
What is a system call? What system calls are you familiar with?
How does a program execute a system call?
- The program executes a trap instruction. The instruction jumps into the kernel while raising the privilege level to kernel space.
- Once in kernel space, it can perform any privileged operation
- Once it's finished, it calls a "return-from-trap" instruction which returns to user space while lowering the privilege level back to user space.
Explain the fork() system call
fork() is used for creating a new process. It does so by cloning the calling process but the child process has its own PID and any memory locks, I/O operations and semaphores are not inherited.
What is the return value of fork()?
- On success, the PID of the child process in parent and 0 in child process
- On error, -1 in the parent
Name one reason for fork() to fail
Not enough memory to create a new process
Why do we need the wait() system call?
wait() is used by a parent process to wait for the child process to finish execution. If wait is not used by a parent process then a child process might become a zombie process.
How does the kernel notify the parent process about child process termination?
The kernel notifies the parent by sending it the SIGCHLD signal.
How is waitpid() different from wait()?
waitpid() lets the parent wait for a specific child and can be made non-blocking (with the WNOHANG option), while wait() blocks until any child terminates.
It also lets library routines (e.g. system()) wait for a specific child without interfering with other children the process has not yet waited for.
True or False? The wait() system call won't return until the child process has run and exited
True in most cases though there are cases where wait() returns before the child exits.
Explain the exec() system call
It transforms the current running program into another program.
Given the name of an executable and some arguments, it loads the code and static data from the specified executable and overwrites the current code segment and current static data. After re-initializing the memory space (stack and heap), the OS runs the program, passing any arguments as the argv of that process.
True or False? A successful call to exec() never returns
True
Since a successful exec() replaces the current process image, it has nothing to return to: the calling program no longer exists.
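A short illustration (Unix-only; the echo binary is assumed to be available in PATH). On success the process image is replaced, so the final print never runs:

```python
import os

print("before exec, PID:", os.getpid())
os.execvp("echo", ["echo", "hello from the new program"])

# Never reached on success: the process image was replaced by echo,
# while keeping the same PID.
print("this line is never printed")
```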
What system call is used for listing files?
What system calls are used for creating a new process?
fork() and exec(); the wait() system call is also commonly part of this workflow.
What does execve() do?
Executes a program. The program is passed as a filename (or path) and must be a binary executable or a script.
What is the return value of malloc?
Explain the pipe() system call. What is it used for?
"Pipes provide a unidirectional interprocess communication channel. A pipe has a read end and a write end. Data written to the write end of a pipe can be read from the read end of the pipe. A pipe is created using pipe(2), which returns two file descriptors, one referring to the read end of the pipe, the other referring to the write end."
What happens when you execute ls -l?
- Shell reads the input using getline(), which reads the input file stream and stores it into a buffer as a string
- The buffer is broken down into tokens and stored in an array this way: {"ls", "-l", "NULL"}
- Shell checks if an expansion is required (in the case of ls *.c)
- Once the program is in memory, its execution starts, first by calling readdir()
Notes:
- getline() originates in the GNU C library and is used to read lines from an input stream, storing them in the buffer
What happens when you execute ls -l *.log?
What does the readdir() system call do?
What exactly does the command alias x=y do?
Why is running a new program done using the fork() and exec() system calls? Why wasn't a different API designed with a single call that runs a new program?
This way provides a lot of flexibility. It allows the shell, for example, to run code after the call to fork() but before the call to exec(). Such code can be used to alter the environment of the program it is about to run.
Describe shortly what happens when you execute a command in the shell
The shell figures out, using the PATH variable, where the executable of the command resides in the filesystem. It then calls fork() to create a new child process for running the command. Once the fork is executed successfully, it calls a variant of exec() to execute the command and, finally, waits for the command to finish using wait(). When the child completes, the shell returns from wait() and prints out the prompt again.
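The same flow can be sketched as a toy shell loop (an illustration only; real shells also handle expansion, pipes, redirection, builtins, job control and much more):

```python
import os
import shlex
import sys

while True:
    try:
        line = input("mysh> ")
    except EOFError:
        break
    argv = shlex.split(line)
    if not argv:
        continue

    pid = os.fork()                    # create a child for the command
    if pid == 0:
        try:
            os.execvp(argv[0], argv)   # the PATH lookup happens inside execvp()
        except FileNotFoundError:
            print(f"{argv[0]}: command not found", file=sys.stderr)
            os._exit(127)
    os.waitpid(pid, 0)                 # wait, then print the prompt again
```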
Linux Filesystem & Files
How to create a file of a certain size?
There are a couple of ways to do that:
- dd if=/dev/urandom of=new_file.txt bs=2MB count=1
- truncate -s 2M new_file.txt
- fallocate -l 2097152 new_file.txt
What does the following block do?:
open("/my/file") = 5
read(5, "file content")
open("/my/file") = 5
read(5, "file content")
These system calls are reading the file /my/file
and 5 is the file descriptor number.
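Roughly the same sequence can be reproduced from Python (the path /my/file is just a placeholder, and the descriptor number you get back depends on what the process already has open):

```python
import os

fd = os.open("/my/file", os.O_RDONLY)   # open(2) returns a file descriptor
print("file descriptor:", fd)
print(os.read(fd, 4096))                # read(2) using that descriptor
os.close(fd)
```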
Describe three different ways to remove a file (or its content)
What is the difference between a process and a thread?
What is context switch?
From Wikipedia: a context switch is the process of storing the state of a process or thread, so that it can be restored and resume execution at a later point
You found there is a server with high CPU load but you didn't find a process with high CPU. How is that possible?
Linux Advanced - Networking
When you run ip a you see there is a device called 'lo'. What is it and why do we need it?
What does the traceroute command do? How does it work?
Another common way to ask this question is: "what part of the tcp header does traceroute modify?"
What is network bonding? What types are you familiar with?
How to link two separate network namespaces so you can ping an interface on one namespace from the second one?
What are cgroups?
Explain Process Descriptor and Task Structure
What are the differences between threads and processes?
Explain Kernel Threads
What happens when socket system call is used?
This is a good article about the topic: https://ops.tips/blog/how-linux-creates-sockets
You executed a script and while still running, it got accidentally removed. Is it possible to restore the script while it's still running?
Linux Memory
What is the difference between MemFree and MemAvailable in /proc/meminfo?
MemFree - the amount of unused physical RAM in your system
MemAvailable - the amount of memory available for new workloads (without pushing the system to use swap), based on MemFree, Active(file), Inactive(file), and SReclaimable
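A small sketch that reads both values straight from /proc/meminfo (Linux only) and prints them in MiB:

```python
wanted = {"MemFree", "MemAvailable"}

with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":", 1)
        if key in wanted:
            kib = int(value.strip().split()[0])   # values are reported in kB
            print(f"{key}: {kib / 1024:.1f} MiB")
```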
What is the difference between paging and swapping?
Explain what is OOM killer
Distribution
What is a Linux distribution?
What Linux distributions are you familiar with?
What are the components of a Linux distribution?
- Kernel
- Utilities
- Services
- Software/Packages Management
Linux - Sed
Using sed, extract the date from the following line: 201.7.19.90 - - [05/Jun/1985:13:42:99 +0000] "GET /site HTTP/1.1" 200 32421
echo $line | sed 's/.*\[//g;s/].*//g;s/:.*//g'
Linux - Misc
Name 5 commands which are two letters long
ls, wc, dd, df, du, ps, ip, cp, cd ...
What ways are there for creating a new empty file?
- touch new_file
- echo "" > new_file
How does `cd -` work? How does it know the previous location?
$OLDPWD
List three ways to print all the files in the current directory
- ls
- find .
- echo *
How to count the number of lines in a file? What about words?
You define x=2 in /etc/bashrc and x=6 in ~/.bashrc, and then you log in to the system. What would be the value of x?
Explain "environment variables". How do you list all environment variables?
What is a TTY device?
How to create your own environment variables?
X=2, for example. But this will not be available in new shells or child processes. To have it in those as well, use export X=2
What does a double dash (--) mean?
It's used in commands to mark the end of command options. One common example is when used with git to discard local changes: git checkout -- some_file
Are wildcards implemented in user space or kernel space?
If I plug a new device into a Linux machine, where on the system will a new device entry/file be created?
/dev
Why are there different sections in man? What is the difference between the sections?
What is User-mode Linux?
Under which license is Linux distributed?
GPL v2
Operating System
Operating System Exercises
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Fork 101 | Fork | Link | Link | |
Fork 102 | Fork | Link | Link |
Operating System - Self Assessment
What is an operating system?
From the book "Operating Systems: Three Easy Pieces":
"responsible for making it easy to run programs (even allowing you to seemingly run many at the same time), allowing programs to share memory, enabling programs to interact with devices, and other fun stuff like that".
Operating System - Process
Can you explain what is a process?
A process is a running program. A program is one or more instructions and the program (or process) is executed by the operating system.
If you had to design an API for processes in an operating system, what would this API look like?
It would support the following:
- Create - allow to create new processes
- Delete - allow to remove/destroy processes
- State - allow to check the state of the process, whether it's running, stopped, waiting, etc.
- Stop - allow to stop a running process
How is a process created?
- The OS reads the program's code and any additional relevant data
- The program's code is loaded into memory or, more specifically, into the address space of the process
- Memory is allocated for the program's stack (aka run-time stack). The stack is also initialized by the OS with data like argv, argc and the parameters to main()
- Memory is allocated for the program's heap, which is required for dynamically allocated data like linked lists and hash tables
- I/O initialization tasks are performed, like in Unix/Linux based systems where each process has 3 file descriptors (input, output and error)
- The OS runs the program, starting from main()
True or False? The loading of the program into the memory is done eagerly (all at once)
False. It was true in the past but today's operating systems perform lazy loading which means only the relevant pieces required for the process to run are loaded first.
What are different states of a process?
- Running - it's executing instructions
- Ready - it's ready to run but for different reasons it's on hold
- Blocked - it's waiting for some operation to complete. For example I/O disk request
What are some reasons for a process to become blocked?
- I/O operations (e.g. Reading from a disk)
- Waiting for a packet from a network
What is Inter Process Communication (IPC)?
What is "time sharing"?
Even when using a system with one physical CPU, it's possible to allow multiple users to work on it and run programs. This is possible with time sharing, where computing resources are shared in a way that makes it seem to the user as if the system has multiple CPUs, while in fact it's simply one CPU shared by applying multiprogramming and multitasking.
What is "space sharing"?
Somewhat the opposite of time sharing. While in time sharing a resource is used for a while by one entity and then the same resource can be used by another entity, in space sharing the space is shared by multiple entities but in a way where it's not being transferred between them.
It's used by one entity until this entity decides to get rid of it. Take for example storage. In storage, a file is yours until you decide to delete it.
What component determines which process runs at a given moment in time?
CPU scheduler
Operating System - Memory
What is "virtual memory" and what purpose it serves?
What is demand paging?
What is copy-on-write or shadowing?
What is a kernel, and what does it do?
The kernel is part of the operating system and is responsible for tasks like:
- Allocating memory
- Scheduling processes
- Controlling the CPU
True or False? Some pieces of the code in the kernel are loaded into protected areas of the memory so applications can't overwrite them
True
What is POSIX?
Explain what is Semaphore and what its role in operating systems
What is cache? What is buffer?
Buffer: a reserved place in RAM which is used to hold data for temporary purposes. Cache: cache is usually used when processes are reading from and writing to the disk, to speed things up by making similar data used by different programs easily accessible.
Virtualization
What is Virtualization?
Virtualization uses software to create an abstraction layer over computer hardware that allows the hardware elements of a single computer—processors, memory, storage and more - to be divided into multiple virtual computers, commonly called virtual machines (VMs).
What is a hypervisor?
Red Hat: "A hypervisor is software that creates and runs virtual machines (VMs). A hypervisor, sometimes called a virtual machine monitor (VMM), isolates the hypervisor operating system and resources from the virtual machines and enables the creation and management of those VMs."
Read more here
What types of hypervisors are there?
Hosted hypervisors and bare-metal hypervisors.
What are the advantages and disadvantages of a bare-metal hypervisor over a hosted hypervisor?
Due to having its own drivers and direct access to hardware components, a bare-metal hypervisor will often have better performance, stability and scalability.
On the other hand, there will probably be some limitations regarding loading (any) drivers, so a hosted hypervisor will usually benefit from better hardware compatibility.
What types of virtualization are there?
- Operating system virtualization
- Network functions virtualization
- Desktop virtualization
Is containerization a type of virtualization?
Yes, it's operating-system-level virtualization, where the kernel is shared and allows the use of multiple isolated user-space instances.
How did the introduction of virtual machines change the industry and the way applications were deployed?
The introduction of virtual machines allowed companies to deploy multiple business applications on the same hardware, with each application separated from the others in a secured way, since each one runs on its own separate operating system.
Ansible
Ansible Exercises
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
My First Task | Tasks | Exercise | Solution | |
Upgrade and Update Task | Tasks | Exercise | Solution | |
My First Playbook | Playbooks | Exercise | Solution |
Ansible Self Assessment
Describe each of the following components in Ansible, including the relationship between them:
- Task
- Module
- Play
- Playbook
- Role
Task – a call to a specific Ansible module
Module – the actual unit of code executed by Ansible on your own host or a remote host. Modules are indexed by category (database, file, network, …) and also referred to as task plugins.
Play – One or more tasks executed on a given host(s)
Playbook – One or more plays. Each play can be executed on the same or different hosts
Role – Ansible roles allow you to group resources based on certain functionality/service such that they can be easily reused. In a role, you have directories for variables, defaults, files, templates, handlers, tasks, and metadata. You can then use the role by simply specifying it in your playbook.
How is Ansible different from other automation tools? (e.g. Chef, Puppet, etc.)
Ansible is:
- Agentless
- Minimal run requirements (Python & SSH) and simple to use
- Default mode is "push" (it supports also pull)
- Focus on simplicity and ease-of-use
True or False? Ansible follows the mutable infrastructure paradigm
True. In the immutable infrastructure approach, you replace infrastructure instead of modifying it.
Ansible rather follows the mutable infrastructure paradigm, where it allows you to change the configuration of different components. This approach is not perfect and has its own disadvantages, like "configuration drift", where different components may end up in a different state for different reasons.
True or False? Ansible uses declarative style to describe the expected end state
False. It uses a procedural style.
What kind of automation wouldn't you do with Ansible and why?
While it's possible to provision resources with Ansible, some prefer to use tools that follow the immutable infrastructure paradigm. Ansible doesn't save state by default. So a task that creates 5 instances, for example, will create 5 additional instances when executed again (unless an additional check is implemented or explicit names are provided), while other tools might check whether 5 instances already exist. If only 4 exist (by checking the state file for example), one additional instance will be created to reach the end goal of 5 instances.
How do you list all modules and how can you see details on a specific module?
- Ansible online docs
- ansible-doc -l for a list of modules and ansible-doc [module_name] for detailed information on a specific module
Ansible - Inventory
What is an inventory file and how do you define one?
An inventory file defines hosts and/or groups of hosts on which Ansible tasks are executed.
An example of inventory file:
192.168.1.2
192.168.1.3
192.168.1.4
[web_servers]
190.40.2.20
190.40.2.21
190.40.2.22
What is a dynamic inventory file? When would you use one?
A dynamic inventory file tracks hosts from one or more sources like cloud providers and CMDB systems.
You should use one when using external sources and especially when the hosts in your environment are being automatically
spun up and shut down, without you tracking every change in these sources.
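A minimal sketch of a dynamic inventory script: Ansible runs the executable with --list and expects JSON describing groups, hosts and (optionally) host variables. The hosts below are hard-coded for illustration; a real script would query a cloud provider API or a CMDB.

```python
#!/usr/bin/env python3
import json
import sys


def build_inventory():
    # In a real dynamic inventory this data would come from an external source
    return {
        "web_servers": {"hosts": ["190.40.2.20", "190.40.2.21", "190.40.2.22"]},
        "_meta": {"hostvars": {}},   # avoids one --host call per host
    }


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # --host <hostname> (or anything else): no extra host variables
        print(json.dumps({}))
```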
Ansible - Variables
Modify the following task to use a variable instead of the value "zlib" and have "zlib" as the default in case the variable is not defined
- name: Install a package
package:
name: "zlib"
state: present
- name: Install a package
package:
name: "zlib"
state: present
- name: Install a package
package:
name: "{{ package_name|default('zlib') }}"
state: present
How to make the variable "use_var" optional?
- name: Install a package
package:
name: "zlib"
state: present
use: "{{ use_var }}"
- name: Install a package
package:
name: "zlib"
state: present
use: "{{ use_var }}"
With "default(omit)"
- name: Install a package
package:
name: "zlib"
state: present
use: "{{ use_var|default(omit) }}"
What would be the result of the following play?
---
- name: Print information about my host
hosts: localhost
gather_facts: 'no'
tasks:
- name: Print hostname
debug:
msg: "It's me, {{ ansible_hostname }}"
When given code, always inspect it thoroughly. If your answer is “this will fail” then you are right. We are using a fact (ansible_hostname), which is a piece of information gathered from the host we are running on. But in this case, we disabled fact gathering (gather_facts: no), so the variable would be undefined, which will result in failure.
When will the value '2017' be used in this case: `{{ lookup('env', 'BEST_YEAR') | default('2017', true) }}`?
When the environment variable 'BEST_YEAR' is empty or false.
If the value of certain variable is 1, you would like to use the value "one", otherwise, use "two". How would you do it?
{{ (certain_variable == 1) | ternary("one", "two") }}
The value of a certain variable you use is the string "True". You would like the value to be a boolean. How would you cast it?
{{ some_string_var | bool }}
You want to run an Ansible playbook only on a specific minor version of your OS. How would you achieve that?
What is the "become" directive used for in Ansible?
What are facts? How to see all the facts of a certain host?
What would be the result of running the following task? How to fix it?
- hosts: localhost
tasks:
- name: Install zlib
package:
name: zlib
state: present
- hosts: localhost
tasks:
- name: Install zlib
package:
name: zlib
state: present
Which Ansible best practices are you familiar with? Name at least three
Explain the directory layout of an Ansible role
What 'blocks' are used for in Ansible?
How do you handle errors in Ansible?
You would like to run a certain command if a task fails. How would you achieve that?
Write a playbook to install ‘zlib’ and ‘vim’ on all hosts if the file ‘/tmp/mario’ exists on the system.
---
- hosts: all
vars:
mario_file: /tmp/mario
package_list:
- 'zlib'
- 'vim'
tasks:
- name: Check for mario file
stat:
path: "{{ mario_file }}"
register: mario_f
- name: Install zlib and vim if mario file exists
become: "yes"
package:
name: "{{ item }}"
state: present
with_items: "{{ package_list }}"
when: mario_f.stat.exists
Write a single task that verifies all the files in files_list variable exist on the host
- name: Ensure all files exist
assert:
that:
- item.stat.exists
loop: "{{ files_list }}"
Write a playbook to deploy the file ‘/tmp/system_info’ on all hosts except for controllers group, with the following content
I'm <HOSTNAME> and my operating system is <OS>
Replace <HOSTNAME> and <OS> with the actual data for the specific host you are running on
The playbook to deploy the system_info file
---
- name: Deploy /tmp/system_info file
hosts: all:!controllers
tasks:
- name: Deploy /tmp/system_info
template:
src: system_info.j2
dest: /tmp/system_info
The content of the system_info.j2 template
# {{ ansible_managed }}
I'm {{ ansible_hostname }} and my operating system is {{ ansible_distribution }}
The variable 'whoami' is defined in the following places:
- role defaults -> whoami: mario
- extra vars (variables you pass to Ansible CLI with -e) -> whoami: toad
- host facts -> whoami: luigi
- inventory variables (doesn’t matter which type) -> whoami: bowser
According to variable precedence, which one will be used?
The right answer is ‘toad’.
Variable precedence is about how variables override each other when they are set in different locations. If you haven't experienced it so far, I'm sure at some point you will, which makes it a useful topic to be aware of.
In the context of our question, the order will be extra vars (always override any other variable) -> host facts -> inventory variables -> role defaults (the weakest).
Here is the order of precedence from least to greatest (the last listed variables winning prioritization):
- command line values (eg “-u user”)
- role defaults [1]
- inventory file or script group vars [2]
- inventory group_vars/all [3]
- playbook group_vars/all [3]
- inventory group_vars/* [3]
- playbook group_vars/* [3]
- inventory file or script host vars [2]
- inventory host_vars/* [3]
- playbook host_vars/* [3]
- host facts / cached set_facts [4]
- play vars
- play vars_prompt
- play vars_files
- role vars (defined in role/vars/main.yml)
- block vars (only for tasks in block)
- task vars (only for the task)
- include_vars
- set_facts / registered vars
- role (and include_role) params
- include params
- extra vars (always win precedence)
A full list can be found at PlayBook Variables . Also, note there is a significant difference between Ansible 1.x and 2.x.
For each of the following statements determine if it's true or false:
- A module is a collection of tasks
- It’s better to use shell or command instead of a specific module
- Host facts override play variables
- A role might include the following: vars, meta, and handlers
- Dynamic inventory is generated by extracting information from external sources
- It’s a best practice to use indention of 2 spaces instead of 4
- ‘notify’ used to trigger handlers
- This “hosts: all:!controllers” means ‘run only on controllers group hosts’
Explain the difference between forks, serial and throttle.
serial is like running the playbook for each host in turn: the complete playbook finishes on one host before moving on to the next host. forks=1 means run the first task in a play on one host before running the same task on the next host, so the first task will be run for each host before the next task is touched. The default number of forks in Ansible is 5.
[defaults]
forks = 30
- hosts: webservers
serial: 1
tasks:
- name: ...
Ansible also supports throttle. This keyword limits the number of workers up to the maximum set via the forks setting or serial. This can be useful for restricting tasks that may be CPU-intensive or interact with a rate-limiting API:
tasks:
- command: /path/to/cpu_intensive_command
throttle: 1
What is ansible-pull? How is it different from how ansible-playbook works?
What is Ansible Vault?
Demonstrate each of the following with Ansible:
- Conditionals
- Loops
What are filters? Do you have experience with writing filters?
Write a filter to capitalize a string
def cap(self, string):
return string.capitalize()
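For completeness, here is a sketch of how such a filter could be packaged as a filter plugin so a playbook can use it as {{ 'mario' | cap }}. The file path filter_plugins/capitalize.py is just a common convention, not something from the original answer.

```python
# filter_plugins/capitalize.py


def cap(string):
    """Return the string with its first character capitalized."""
    return string.capitalize()


class FilterModule(object):
    """Expose custom filters to Ansible."""

    def filters(self):
        return {"cap": cap}
```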
You would like to run a task only if previous task changed anything. How would you achieve that?
What are callback plugins? What can you achieve by using callback plugins?
What are Ansible Collections?
What is the difference between `include_tasks` and `import_tasks`?
File '/tmp/exercise' includes the following content
Goku = 9001
Vegeta = 5200
Trunks = 6000
Gotenks = 32
With one task, switch the content to:
Goku = 9001
Vegeta = 250
Trunks = 40
Gotenks = 32
Goku = 9001
Vegeta = 5200
Trunks = 6000
Gotenks = 32
Goku = 9001
Vegeta = 250
Trunks = 40
Gotenks = 32
- name: Change saiyans levels
lineinfile:
dest: /tmp/exercise
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
with_items:
- { regexp: '^Vegeta', line: 'Vegeta = 250' }
- { regexp: '^Trunks', line: 'Trunks = 40' }
...
Ansible - Execution and Strategy
True or False? By default, Ansible will execute all the tasks in play on a single host before proceeding to the next host
False. Ansible will execute a single task on all hosts before moving to the next task in a play. As of today, it uses 5 forks by default.
This behaviour is described as "strategy" in Ansible and it's configurable.
What is a "strategy" in Ansible? What is the default strategy?
A strategy in Ansible describes how Ansible will execute the different tasks on the hosts. By default, Ansible uses the "linear" strategy, which means each task runs on all hosts before proceeding to the next task.
What strategies are you familiar with in Ansible?
- Linear: the default strategy in Ansible. Run each task on all hosts before proceeding.
- Free: For each host, run all the tasks until the end of the play as soon as possible
- Debug: Run tasks in an interactive way
What is the serial keyword used for?
It's used to specify the number (or percentage) of hosts to run the full play on, before moving on to the next batch of hosts in the group.
For example:
- name: Some play
hosts: databases
serial: 4
If your group has 8 hosts, it will run the whole play on 4 hosts and then the same play on the remaining 4 hosts.
Ansible Testing
How do you test your Ansible based projects?
What is Molecule? How does it work?
You run Ansible tests and you get "idempotence test failed". What does it mean? Why is idempotence important?
Ansible - Debugging
How to find out the data type of a certain variable in one of the playbooks?
"{{ some_var | type_debug }}"
Ansible - Collections
What are collections in Ansible?
Terraform
Explain what Terraform is and how it works
Terraform.io: "Terraform is an infrastructure as code (IaC) tool that allows you to build, change, and version infrastructure safely and efficiently."
Why one would prefer using Terraform and not other technologies? (e.g. Ansible, Puppet, CloudFormation)
A common wrong answer is to say that Ansible and Puppet are configuration management tools and Terraform is a provisioning tool. While technically true, it doesn't mean Ansible and Puppet can't be used for provisioning infrastructure. Also, it doesn't explain why Terraform should be used over CloudFormation if at all.
The benefits of Terraform over the other tools:
- It follows the immutable infrastructure approach which has benefits like avoiding a configuration drift over time
- Ansible and Puppet are more procedural (you specify what to execute in each step) while Terraform is declarative, since you describe the overall desired state rather than individual resources or tasks. You can give the example of going from 1 to 2 servers in each tool: in Terraform you simply specify 2, while in Ansible and Puppet you have to explicitly provision only the one additional server.
How do you structure your Terraform projects?
terraform_directory
providers.tf -> lists the providers (source, version, etc.)
variables.tf -> any variable used in other files such as main.tf
main.tf -> lists the resources
True or False? Terraform follows the mutable infrastructure paradigm
False. Terraform follows immutable infrastructure paradigm.
True or False? Terraform uses declarative style to describe the expected end state
True
What is HCL?
HCL stands for Hashicorp Configuration Language. It is the language Hashicorp made to use as the configuration language for a number of its tools, including terraform.
Explain what is "Terraform configuration"
A configuration is a root module along with a tree of child modules that are called as dependencies from the root module.
Explain what the following commands do:
terraform init
terraform plan
terraform validate
terraform apply
terraform init
terraform plan
terraform validate
terraform apply
terraform init
scans your code to figure out which providers you are using and downloads them.
terraform plan
will let you see what terraform is about to do before actually doing it.
terraform validate
checks if configuration is syntactically valid and internally consistent within a directory.
terraform apply
will provision the resources specified in the .tf files.
Terraform - Resources
What is a "resource"?
HashiCorp: "Terraform uses resource blocks to manage infrastructure, such as virtual networks, compute instances, or higher-level components such as DNS records. Resource blocks represent one or more infrastructure objects in your Terraform configuration."
Explain each part of the following line: `resource "aws_instance" "web_server" {...}`
- resource: keyword for defining a resource
- "aws_instance": the type of the resource
- "web_server": the name of the resource
What is the ID of the following resource: `resource "aws_instance" "web_server" {...}`
aws_instance.web_server
True or False? Resource ID must be unique within a workspace
True
Explain each of the following in regards to resources
- Arguments
- Attributes
- Meta-arguments
- Arguments: resource-specific configurations
- Attributes: values exposed by the resource in the form resource_type.resource_name.attribute_name. They are usually set by the provider or the API.
- Meta-arguments: Terraform-specific functions used to change a resource's behaviour
Terraform - Providers
Explain what is a "provider"
terraform.io: "Terraform relies on plugins called "providers" to interact with cloud providers, SaaS providers, and other APIs...Each provider adds a set of resource types and/or data sources that Terraform can manage. Every resource type is implemented by a provider; without providers, Terraform can't manage any kind of infrastructure."
What is the name of the provider in this case: `resource "libvirt_domain" "instance" {...}`
libvirt
Terraform - Variables
What are Input Variables in Terraform? Why one should use them?
Input variables serve as parameters to a module in Terraform. They allow you, for example, to define the value of a variable once and use it in different places in the module, so the next time you want to change the value, you change it in one place instead of in many places throughout the module.
How to define variables?
variable "app_id" {
type = string
description = "The id of application"
default = "some_value"
}
Usually they are defined in their own file (vars.tf for example).
How variables are used in modules?
They are referenced with var.VARIABLE_NAME
vars.tf:
variable "memory" {
type = string
default "8192"
}
variable "cpu" {
type = string
default = "4"
}
main.tf:
resource "libvirt_domain" "vm1" {
name = "vm1"
memory = var.memory
cpu = var.cpu
}
How would you enforce that users of your variables provide values which satisfy certain constraints? For example, a number greater than 1
Using a validation block:
variable "some_var" {
type = number
validation {
condition = var.some_var > 1
error_message = "you have to specify a number greater than 1"
}
}
What is the effect of setting variable as "sensitive"?
It doesn't show its value when you run terraform apply
or terraform plan
but eventually it's still recorded in the state file.
True or False? If an expression's result depends on a sensitive variable, it will be treated as sensitive as well
True
The same variable is defined in the following places:
- The file
terraform.tfvars
- Environment variable
- Using
-var
or -var-file
According to variable precedence, which source will be used first?
The order is as follows, with later sources taking precedence over earlier ones:
- Environment variables
- The file terraform.tfvars
- Using -var or -var-file
What other way is there to define lots of variables in a more "simplified" way?
Using a .tfvars file, which contains simple variable name assignments this way:
x = 2
y = "mario"
z = "luigi"
Terraform - State
What is the terraform.tfstate file used for?
It keeps track of the IDs of created resources so that Terraform knows what it's managing.
How do you rename an existing resource?
terraform state mv
Why does it matter where you store the tfstate file? Where would you store it?
- tfstate contains credentials in plain text. You don't want to put it in publicly shared location
- tfstate shouldn't be modified concurrently, so putting it in a shared location available to everyone with "write" permissions might lead to issues. (Terraform remote state doesn't have this problem.)
- tfstate is an important file. As such, it might be better to put it in a location that has regular backups.
As such, tfstate shouldn't be stored in git repositories. Secured storage, such as a secured bucket, is a better option.
Which command is responsible for creating state file?
- terraform apply
- The above command will create the tfstate file in the working folder.
By default where does the state get stored?
- The state is stored by default in a local file named terraform.tfstate.
Can we store tfstate file at remote location? If yes, then in which condition you will do this?
- Yes, it can also be stored remotely, which works better in a team environment, on the condition that the remote location is not publicly accessible, since the tfstate file can contain sensitive information as well. Access to this remote location must be shared only with team members.
Mention some best practices related to tfstate
- Don't edit it manually. tfstate was designed to be manipulated by terraform and not by users directly.
- Store it in secured location (since it can include credentials and sensitive data in general)
- Back it up regularly so you can roll back easily when needed
- Store it in remote shared storage. This is especially needed when working in a team and the state can be updated by any of the team members
- Enable versioning if the storage where you keep the state file supports it. Versioning is great for backups and rollbacks in case of an issue.
Why should concurrent edits of the state file be avoided and how?
If two users or processes edit the state file concurrently, it can result in an invalid state file that doesn't actually represent the state of the resources.
To avoid that, Terraform can apply state locking if the backend supports it. For example, AWS S3 supports state locking and consistency via DynamoDB. If the backend supports it, Terraform will usually make use of state locking automatically, so nothing is required from the user to activate it.
Describe how you manage state file(s) when you have multiple environments (e.g. development, staging and production)
There is no right or wrong here, but it seems that the overall preferred way is to have a dedicated state file per environment.
How do you declare a variable whose value is set by an external source or changes during terraform apply?
You declare it this way: variable "my_var" {}
You've deployed a virtual machine with Terraform and you would like to pass data to it (or execute some commands). Which concept of Terraform would you use?
Terraform - Provisioners
What are "Provisioners"? What they are used for?
Provisioners used to execute actions on local or remote machine. It's extremely useful in case you provisioned an instance and you want to make a couple of changes in the machine you've created without manually ssh into it after Terraform finished to run and manually run them.
What is local-exec
and remote-exec
in the context of provisioners?
What is a "tainted resource"?
It's a resource which was successfully created but failed during provisioning. Terraform will fail and mark this resource as "tainted".
What does terraform taint do?
terraform taint resource.id
manually marks the resource as tainted in the state file. So when you run terraform apply
the next time, the resource will be destroyed and recreated.
What types of variables are supported in Terraform?
- string
- number
- bool
- list(<TYPE>)
- set(<TYPE>)
- map(<TYPE>)
- object({<ATTR_NAME> = <TYPE>, ... })
- tuple([<TYPE>, ...])
What is a data source? In what scenarios, for example, would you need to use it?
Data sources lookup or compute values that can be used elsewhere in terraform configuration.
There are quite a few cases you might need to use them:
- you want to reference resources not managed through terraform
- you want to reference resources managed by a different terraform module
- you want to cleanly compute a value with typechecking, such as with
aws_iam_policy_document
What are output variables and what does terraform output do?
Output variables are named values that are sourced from the attributes of a module. They are stored in terraform state, and can be used by other modules through
remote_state
Explain Modules
A Terraform module is a set of Terraform configuration files in a single directory. Modules are small, reusable Terraform configurations that let you manage a group of related resources as if they were a single resource. Even a simple configuration consisting of a single directory with one or more .tf files is a module. When you run Terraform commands directly from such a directory, it is considered the root module. So in this sense, every Terraform configuration is part of a module.
What is the Terraform Registry?
The Terraform Registry provides a centralized location for official and community-managed providers and modules.
Explain remote-exec
and local-exec
Explain "Remote State". When would you use it and how?
Terraform generates a `terraform.tfstate` JSON file that describes the components/services provisioned on the specified provider. Remote State stores this file in remote storage to enable collaboration amongst a team.
Explain "State Locking"
State locking is a mechanism that blocks operations against a specific state file from multiple callers, so as to avoid conflicting operations from different team members. Once the first caller's lock is released, the other team member may go ahead and carry out their own operation. Nevertheless, Terraform will first check the state file to see if the desired resource already exists, and if not, it goes ahead and creates it.
What is the "Random" provider? What is it used for
The random provider aids in generating numeric or alphabetic characters to use as a prefix or suffix for a desired named identifier.
How do you test a terraform module?
Many examples are acceptable, but the most common answer would likely be using the tool terratest, and testing that a module can be initialized, can create resources, and can destroy those resources cleanly.
Aside from .tfvars
files or CLI arguments, how can you inject dependencies from other modules?
The built-in terraform way would be to use
remote-state
to lookup the outputs from other modules.
It is also common in the community to use a tool called terragrunt
to explicitly inject variables between modules.
What is Terraform import?
Terraform import is used to import existing infrastructure. It allows you to take resources created by some other means (e.g. manually launched cloud resources) and bring them under Terraform management.
How do you import existing resource using Terraform import?
- Identify which resource you want to import.
- Write terraform code matching configuration of that resource.
- Run terraform command
terraform import RESOURCE ID
eg. Let's say you want to import an aws instance. Then you'll perform following:
- Identify that aws instance in console
- Refer to its configuration and write Terraform code which will look something like:
resource "aws_instance" "tf_aws_instance" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
tags = {
Name = "import-me"
}
}
- Run terraform command
terraform import aws_instance.tf_aws_instance i-12345678
Containers
Containers Exercises
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Running Containers | Intro | Exercise | Solution | |
Working with Images | Image | Exercise | Solution | |
My First Dockerfile | Dockerfile | Exercise | ||
Run, Forest, Run! | Restart Policies | Exercise | Solution | |
Layer by Layer | Image Layers | Exercise | Solution | |
Containerize an application | Containerization | Exercise | Solution | |
Multi-Stage Builds | Multi-Stage Builds | Exercise | Solution |
Containers Self Assessment
What is a Container?
This can be tricky to answer since there are many ways to create containers:
- Docker
- systemd-nspawn
- LXC
If we focus on OCI (Open Container Initiative) based containers, the OCI offers the following definition: "An environment for executing processes with configurable isolation and resource limitations. For example, namespaces, resource limits, and mounts are all part of the container environment."
Why are containers needed? What is their goal?
OCI provides a good explanation: "Define a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in a format that is self-describing and portable, so that any compliant runtime can run it without extra dependencies, regardless of the underlying machine and the contents of the container."
How are containers different from virtual machines (VMs)?
The primary difference between containers and VMs is that containers allow you to virtualize multiple workloads on a single operating system while in the case of VMs, the hardware is being virtualized to run multiple machines each with its own guest OS. You can also think about it as containers are for OS-level virtualization while VMs are for hardware virtualization.
- Containers don't require an entire guest operating system as VMs. Containers share the system's kernel as opposed to VMs. They isolate themselves via the use of kernel's features such as namespaces and cgroups
- It usually takes a few seconds to set up a container, as opposed to VMs which can take minutes or at least more time than containers, since there is an entire OS to boot and initialize, whereas containers share the underlying OS
- Virtual machines are considered to be more secure than containers
- VM portability is considered to be limited when compared to containers
Do we still need virtual machines in the age of containers? Are they still relevant?
In which scenarios would you use containers and in which you would prefer to use VMs?
You should choose VMs when:
- You need to run an application which requires all the resources and functionalities of an OS
- You need full isolation and security
You should choose containers when:
- You need a lightweight solution
- Running multiple versions or instances of a single application
Describe the process of containerizing an application
- Write a Dockerfile that includes your app (including the commands to run it) and its dependencies
- Build the image using the Dockerfile you wrote
- You might want to push the image to a registry
- Run the container using the image you've built
Containers - OCI
What is the OCI?
OCI (Open Container Initiative) is an open governance structure established in 2015 to standardize container creation - mostly the image format and runtime. At that time there were a number of parties involved and the most prominent one was Docker.
Specifications published by the OCI: the image-spec (container image format), the runtime-spec (container runtime) and the distribution-spec (distributing images, e.g. via registries).
Which operations OCI based containers must support?
Create, Kill, Delete, Start and Query State.
Containers - Basic Commands
How to list all the containers on a given host?
In the case of Docker, use: docker container ls
In the case of Podman, it's not very different: podman container ls
How to run a container?
Docker: docker container run ubuntu
Podman: podman container run ubuntu
Why after running podman container run ubuntu
the output of podman container ls
is empty?
Because the container exits immediately after running the ubuntu image. This is completely normal and expected, as containers are designed to run a service or an app and exit when they are done running it.
If you want the container to keep running, you can run a command like sleep 100, which will run for 100 seconds, or you can attach to the terminal of the container with a command similar to: podman container run -it ubuntu /bin/bash
How to attach your shell to a terminal of a running container?
podman container exec -it [container id/name] bash
This can be done in advance while running the container: podman container run -it [image:tag] /bin/bash
True or False? You can remove a running container if it isn't running anything
False. You have to stop the container before removing it.
How to stop and remove a container?
podman container stop <container id/name> && podman container rm <container id/name>
What happens when you run docker container run ubuntu
?
- Docker client posts the command to the API server running as part of the Docker daemon
- Docker daemon checks if a local image exists
- If it exists, it will use it
- If it doesn't exist, it will go to the remote registry (Docker Hub by default) and pull the image locally
- containerd and runc are instructed (by the daemon) to create and start the container
How to run a container in the background?
With the -d flag. It will run in the background and will not attach it to the terminal.
docker container run -d httpd
or podman container run -d httpd
Containers - Images
What is a container image?
- An image of a container contains the application, its dependencies and the operating system where the application is executed.
- It's a collection of read-only layers. These layers are loosely coupled
- Each layer is assembled out of one or more files
Why are container images relatively small?
- Most images don't contain a kernel. They share and access the one used by the host on which they are running
- Containers are intended to run a specific application in most cases. This means they hold only what the application needs in order to run
How to list the container images on a certain host?
podman image ls
docker image ls
It depends on which container engine you use.
What is the centralized location where images are stored called?
Registry
A registry contains one or more ____
which in turn contain one or more ____
A registry contains one or more repositories which in turn contain one or more images.
How to find out which registry you use by default in your environment?
Depends on the containers technology you are using. For example, in case of Docker, it can be done with docker info
> docker info
Registry: https://index.docker.io/v1
How to retrieve the latest ubuntu image?
docker image pull ubuntu:latest
True or False? It's not possible to remove an image if a certain container is using it
True. You should stop and remove the container before trying to remove the image it uses.
True or False? If a tag isn't specified when pulling an image, the 'latest' tag is being used
True
True or False? Using the 'latest' tag when pulling an image means, you are pulling the most recently published image
False. While this might be true in some cases, it's not guaranteed that you'll pull the latest published image when using the 'latest' tag.
For example, in some images, 'edge' tag is used for the most recently published images.
Where are pulled images stored?
Depends on the container technology being used. For example, in case of Docker, images are stored in /var/lib/docker/
Explain container image layers
- The layers of an image are where all the content is stored - code, files, etc.
- Each layer is independent
- Each layer has an ID that is a hash based on its content
- The layers (as the image) are immutable, which means a change to one of the layers can be easily identified
True or False? Changing the content of any of the image layers will cause the hash content of the image to change
True. These hashes are content based and since images (and their layers) are immutable, any change will cause the hashes to change.
How to list the layers of an image?
In case of Docker, you can use docker image inspect <name>
True or False? In most cases, container images contain their own kernel
False. They share and access the one used by the host on which they are running.
True or False? A single container image can have multiple tags
True. When listing images, you might be able to see two images with the same ID but different tags.
What is a dangling image?
It's an image without tags attached to it. One way to reach this situation is by building an image with the exact same name and tag as another, already existing image. It can still be referenced by using its full SHA.
How to see changes done to a given image over time?
In the case of Docker, you could use docker history <name>
True or False? Multiple images can share layers
True.
One evidence for that can be found in pulling images. Sometimes when you pull an image, you'll see a line similar to the following:
fa20momervif17: already exists
This is because it recognizes such layer already exists on the host, so there is no need to pull the same layer twice.
What is the digest of an image? What problem does it solves?
Tags are mutable. This means we can have two different images with the same name and the same tag. It can be very confusing to see two images with the same name and the same tag in your environment. How would you know if they are truly the same or different?
This is where digests come in handy. A digest is a content-addressable identifier. It isn't mutable like a tag. Its value is predictable, and this is how you can tell whether two images are the same content-wise, rather than merely by looking at the name and the tag of the images.
True or False? A single image can support multiple architectures (Linux x64, Windows x64, ...)
True.
What is a distribution hash in regards to layers?
- Layers are compressed when pushed or pulled
- The distribution hash is the hash of the compressed layer
- The distribution hash is used when pulling or pushing images for verification (making sure no one tampered with the image or layers)
- It's also used for avoiding ID collisions (a case where two images have exactly the same generated ID)
How do multi-architecture images work? Explain by describing what happens when an image is pulled
- A client makes a call to the registry to use a specific image (using an image name and optionally a tag)
- A manifest list is parsed (assuming it exists) to check if the architecture of the client is supported and available as a manifest
- If it is supported (a manifest for the architecture is available) the relevant manifest is parsed to obtain the IDs of the layers
- Each layer is then pulled using the obtained IDs from the previous step
How to check which architectures a certain container image supports?
docker manifest inspect <name>
How to check what a certain container image will execute once we'll run a container based on that image?
Look for "Cmd" or "Entrypoint" fields in the output of docker image inspec <image name>
How to view the instructions that were used to build image?
docker image history <image name>:<tag>
How does docker image build work?
- Docker spins up a temporary container
- Runs a single instruction in the temporary container
- Stores the result as a new image layer
- Removes the temporary container
- Repeats for every instruction
What is the role of cache in image builds?
When you build an image for the first time, the different layers are being cached. So, while the first build of the image might take time, any other build of the same image (given that Dockerfile didn't change or the content used by the instructions) will be instant thanks to the caching mechanism used.
In little bit more details, it works this way:
- The first instruction (FROM) will check if base image already exists on the host before pulling it
- For the next instruction, it will check in the build cache if an existing layer was built from the same base image + if it used the same instruction
- If it finds such layer, it skips the instruction and links the existing layer and it keeps using the cache.
- If it doesn't find a matching layer, it builds the layer and the cache is invalidated.
Note: in some cases (like the COPY and ADD instructions) the instruction might stay the same, but if the content of what is being copied has changed, then the cache is invalidated. This check is done by comparing the checksum of each file that is being copied.
What ways are there to reduce container images size?
- Reduce the number of instructions - in some cases you may be able to join layers, for example by installing multiple packages with one instruction or by using && to concatenate RUN instructions
- Use smaller images - in some cases you might be using images that contain more than what is needed for your application to run. It is good to get an overview of the images you usually use and see whether you can switch to smaller ones.
- Cleanup after running commands - some commands, like packages installation, create some metadata or cache that you might not need for running the application. It's important to clean up after such commands to reduce the image size
- For Docker images, you can use multi-stage builds
What are the pros and cons of squashing images?
Pros:
- Smaller image
- Reducing the number of layers (especially if the image has a lot of layers)
Cons:
- No sharing of the image layers
- Push and pull can take more time (because no matching layers found on target)
Containers - Volume
How to create a new volume?
docker volume create some_volume
Containers - Dockerfile
What is a Dockerfile?
Different container engines (e.g. Docker, Podman) can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text file that contains all the instructions for building an image which containers can use.
What is the first instruction in all Dockerfiles and what does it mean?
The first instruction is FROM <image name>
It specifies the base layer of the image to be used. Every other instruction is a layer on top of that base image.
List five different instructions that are available for use in a Dockerfile
- WORKDIR: sets the working directory inside the image filesystems for all the instructions following it
- EXPOSE: exposes the specified port (it doesn't add a new layer; rather, it's documented as image metadata)
- ENTRYPOINT: specifies the startup commands to run when a container is started from the image
- ENV: sets an environment variable to the given value
- USER: sets the user (and optionally the user group) to use while running the image
What are some of the best practices regarding container images and Dockerfiles that you are following?
- Include only the packages you are going to use. Nothing else.
- Specify a tag in FROM instruction. Not using a tag means you'll always pull the latest, which changes over time and might result in unexpected result.
- Do not use environment variables to share secrets
- Use images from official repositories
- Keep images small! - you want them only to include what is required for the application to run successfully. Nothing else.
- If you are using the apt package manager, you might use 'no-install-recommends' with apt-get install to install only the main dependencies (instead of suggested, recommended packages)
What is the "build context"?
Docker docs: "A build’s context is the set of files located in the specified PATH or URL"
What is the difference between ADD and COPY in Dockerfile?
COPY takes in a source and destination. It lets you copy in a file or directory from the build context into the Docker image itself.
ADD lets you do the same, but it also supports two other sources. You can use a URL instead of a file or directory from the build context. In addition, you can extract a tar file from the source directly into the destination.
Although ADD and COPY are functionally similar, generally speaking, COPY is preferred. That’s because it’s more transparent than ADD. COPY only supports the basic copying of files from build context into the container, while ADD has some features (like local-only tar extraction and remote URL support) that are not immediately obvious.
What is the difference between CMD and RUN in Dockerfile?
RUN lets you execute commands inside of your Docker image. These commands get executed once at build time and get written into your Docker image as a new layer. CMD is the command the container executes by default when you launch the built image. A Dockerfile can only have one CMD. You could say that CMD is a Docker run-time operation, meaning it’s not something that gets executed at build time. It happens when you run an image. A running image is called a container.
How to create a new image using a Dockerfile?
The following command is executed from within the directory where the Dockerfile resides:
docker image build -t some_app:latest .
podman image build -t some_app:latest .
Do you perform any checks or testing on your Dockerfiles?
One option is to use hadolint project which is a linter based on Dockerfile best practices.
Which instructions in Dockerfile create new layers?
Instructions such as FROM, COPY and RUN, create new image layers instead of just adding metadata.
Which instructions in Dockerfile create image metadata and don't create new layers?
Instructions such as ENTRYPOINT, ENV, EXPOSE, create image metadata and they don't create new layers.
Is it possible to identify which instructions create a new layer from the output of docker image history?
Containers - Architecture
How do containers achieve isolation from the rest of the system?
Through the use of namespaces and cgroups. Linux kernel has several types of namespaces:
- Process ID namespaces: provide an independent set of process IDs
- Mount namespaces: Isolation and control of mountpoints
- Network namespaces: Isolates system networking resources such as routing table, interfaces, ARP table, etc.
- UTS namespaces: Isolate hostname and domain name
- IPC namespaces: Isolates interprocess communications
- User namespaces: Isolate user and group IDs
- Time namespaces: Isolate the view of system time (offsets for the boot-time and monotonic clocks)
Describe in detail what happens when you run `podman/docker run hello-world`?
The Docker/Podman CLI passes your request to the daemon. The daemon downloads the image from Docker Hub, creates a new container using the image it downloaded, and redirects the output from the container to the CLI, which redirects it to the standard output.
Describe difference between cgroups and namespaces
cgroup: Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour. namespace: wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource.
In short:
Cgroups = limits how much you can use; namespaces = limits what you can see (and therefore use)
Cgroups involve resource metering and limiting: memory, CPU, block I/O, network
Namespaces provide processes with their own view of the system
Multiple namespace types exist: pid, net, mnt, uts, ipc, user
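As a small illustration of cgroups in practice, resource limits can be set when starting a container (values and name are arbitrary):

docker run -d --name limited --memory 256m --cpus 0.5 nginx:alpine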
Containers - Docker Architecture
Which components/layers compose the Docker technology?
- Runtime - responsible for starting and stopping containers
- Daemon - implements the Docker API and takes care of managing images (including builds), authentication, security, networking, etc.
- Orchestrator
What components are part of the Docker engine?
- Docker daemon
- containerd
- runc
What is the low-level runtime?
- The low level runtime is called runc
- It manages every container running on Docker host
- Its purpose is to interact with the underlying OS to start and stop containers
- It's the reference implementation of the OCI (Open Container Initiative) container runtime specification
- It's a small CLI wrapper for libcontainer
What is the high-level runtime?
- The high level runtime is called containerd
- It was developed by Docker Inc and at some point donated to CNCF
- It manages the whole lifecycle of a container - start, stop, remove and pause
- It takes care of setting up network interfaces, volumes, pushing and pulling images, ...
- It manages the lower level runtime (runc) instances
- It's used both by Docker and Kubernetes as a container runtime
- It sits between Docker daemon and runc at the OCI layer
Note: if you run ps -ef | grep -i containerd
on a system with Docker installed and running, you should see a containerd process
True or False? The docker daemon (dockerd) performs lower-level tasks compared to containerd
False. The Docker daemon performs higher-level tasks compared to containerd.
It's responsible for managing networks, volumes, images, ...
Describe in detail what happens when you run `docker pull image:tag`?
The Docker CLI passes your request to the Docker daemon. The dockerd logs show the process:
docker.io/library/busybox:latest resolved to a manifestList object with 9 entries; looking for a unknown/amd64 match
found match for linux/amd64 with media type application/vnd.docker.distribution.manifest.v2+json, digest sha256:400ee2ed939df769d4681023810d2e4fb9479b8401d97003c710d0e20f7c49c6
pulling blob "sha256:61c5ed1cbdf8e801f3b73d906c61261ad916b2532d6756e7c4fbcacb975299fb Downloaded 61c5ed1cbdf8 to tempfile /var/lib/docker/tmp/GetImageBlob909736690
Applying tar in /var/lib/docker/overlay2/507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7/diff" storage-driver=overlay2
Applied tar sha256:514c3a3e64d4ebf15f482c9e8909d130bcd53bcc452f0225b0a04744de7b8c43 to 507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7, size: 1223534
Describe in detail what happens when you run a container
- The Docker client converts the run command into an API payload
- It then POSTs the payload to the API endpoint exposed by the Docker daemon
- When the daemon receives the command to create a new container, it makes a call to containerd via gRPC
- containerd converts the required image into an OCI bundle and tells runc to use that bundle for creating the container
- runc interfaces with the OS kernel to pull together the different constructs (namespace, cgroups, etc.) used for creating the container
- Container process is started as a child-process of runc
- Once the container process starts, runc exits
True or False? Killing the Docker daemon will kill all the running containers
False. While this was true at some point, today the container runtime isn't part of the daemon (it's part of containerd and runc) so stopping or killing the daemon will not affect running containers.
True or False? containerd forks a new instance of runc for every container it creates
True
True or False? Running a dozen of containers will result in having a dozen of runc processes
False. Once a container is created, the parent runc process exits.
What is shim in regards to Docker?
The shim is the process that becomes the container's parent when the runc process exits. It's responsible for:
- Reporting exit code back to the Docker daemon
- Making sure the container doesn't terminate if the daemon is being restarted. It does so by keeping the stdout and stdin open
What does `podman commit` do? When would you use it?
It creates a new image from a container's changes.
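A possible usage sketch (container and image names are made up): modify a running container and then commit it as a new image.

podman run -d --name web nginx:alpine
podman exec web touch /tmp/some-change
podman commit web my-web:patched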
How would you transfer data from one container into another?
What happens to the container's data when the container exits?
Explain what each of the following commands do:
- docker run
- docker rm
- docker ps
- docker pull
- docker build
- docker commit
How do you remove old, non running, containers?
- To remove one or more containers, use the docker container rm command followed by the IDs of the containers you want to remove.
- The docker system prune command will remove all stopped containers, all dangling images, and all unused networks
- docker rm $(docker ps -a -q) - This command will delete all stopped containers. The command docker ps -a -q will return all existing container IDs and pass them to the rm command which will delete them. Any running containers will not be deleted.
How does the Docker client communicate with the daemon?
Via the local socket at /var/run/docker.sock
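Assuming the default socket path, the Engine API can also be queried directly, for example:

curl --unix-socket /var/run/docker.sock http://localhost/version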
Explain Docker interlock
What is Docker Repository?
Explain image layers
A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the very last one is read-only. Each layer is only a set of differences from the layer before it. The layers are stacked on top of each other. When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer. The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged. Because each container has its own writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state.
What best practices are you familiar with related to working with containers?
How do you manage persistent storage in Docker?
How can you connect from the inside of your container to the localhost of your host, where the container runs?
How do you copy files from Docker container to the host and vice versa?
Containers - Docker Compose
Explain what is Docker compose and what is it used for
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
For example, you can use it to set up ELK stack where the services are: elasticsearch, logstash and kibana. Each running in its own container.
In general, it's useful for running applications that are composed of several different services, since it lets you manage them as one deployed app instead of multiple separate services.
Describe the process of using Docker Compose
- Define the services you would like to run together in a docker-compose.yml file
- Run
docker-compose up
to run the services
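A minimal docker-compose.yml sketch (service names, images and ports are illustrative):

version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # placeholder value, not a real secret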
Containers - Docker Images
What is Docker Hub?
One of the most common registries for retrieving images.
How to push an image to Docker Hub?
docker image push [username]/[image name]:[tag]
For example:
docker image push mario/web_app:latest
What is the difference between Docker Hub and Docker cloud?
Docker Hub is a native Docker registry service which allows you to run pull and push commands to install and deploy Docker images from the Docker Hub.
Docker Cloud is built on top of the Docker Hub so Docker Cloud provides you with more options/features compared to Docker Hub. One example is Swarm management which means you can create new swarms in Docker Cloud.
Explain Multi-stage builds
Multi-stage builds allow you to produce smaller container images by splitting the build process into multiple stages.
As an example, imagine you have one Dockerfile where you first build the application and then run it. The whole build process of the application might be using packages and libraries you don't really need for running the application later. Moreover, the build process might produce different artifacts which not all are needed for running the application.
How do you deal with that? Sure, one option is to add more instructions to remove all the unnecessary stuff but, there are a couple of issues with this approach:
- You need to know exactly what to remove, and that might not be as straightforward as you think
- You add new layers which are not really needed
A better solution might be to use multi-stage builds where one stage (the build process) is passing the relevant artifacts/outputs to the stage that runs the application.
True or False? In multi-stage builds, artifacts can be copied between stages
True. This allows us to eventually produce smaller images.
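A minimal multi-stage Dockerfile sketch, assuming a Go application (paths and names are hypothetical):

# build stage - contains the full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app

# final stage - only the compiled artifact is copied in
FROM alpine:3.19
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["app"]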
What is .dockerignore used for?
By default, Docker uses everything (all the files and directories) in the directory you use as build context.
.dockerignore is used for excluding files and directories from the build context
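A hypothetical .dockerignore might look like this:

.git
*.log
node_modules/
tests/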
Containers - Networking
What container network standards or architectures are you familiar with?
CNM (Container Network Model):
- Requires a distributed key-value store (like etcd, for example) for storing the network configuration
- Used by Docker
CNI (Container Network Interface):
- Network configuration should be in JSON format
Containers - Docker Networking
Which network specification does Docker use and what is its implementation called?
Docker is using the CNM (Container Network Model) design specification.
The implementation of CNM specification by Docker is called "libnetwork". It's written in Go.
Explain the following blocks in regards to CNM:
- Networks
- Endpoints
- Sandboxes
Networks: a software implementation of a switch. They are used for grouping and isolating a collection of endpoints.
Endpoints: Virtual network interfaces. Used for making connections.
Sandboxes: Isolated network stack (interfaces, routing tables, ports, ...)
True or False? If you would like to connect a container to multiple networks, you need multiple endpoints
True. An endpoint can connect only to a single network.
What are some features of libnetwork?
- Native service discovery
- ingress-based load balancer
- network control plane and management plane
Containers - Security
What security best practices are there regarding containers?
- Install only the necessary packages in the container
- Don't run containers as root when possible
- Don't mount the Docker daemon unix socket into any of the containers
- Set volumes and container's filesystem to read only
- DO NOT run containers with the --privileged flag
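Some of these practices map directly to docker run flags, for example (the image name is hypothetical):

docker run --rm --user 1000:1000 --read-only --cap-drop ALL some_image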
A container can cause a kernel panic and bring down the whole host. What preventive actions can you apply to avoid this specific situation?
- Install only the necessary packages in the container
- Set volumes and container's filesystem to read only
- DO NOT run containers with the --privileged flag
Containers - Docker in Production
What are some best practices you follow in regards to using containers in production?
Images:
- Use images from official repositories
- Include only the packages you are going to use. Nothing else.
- Specify a tag in the FROM instruction. Not using a tag means you'll always pull the latest, which changes over time and might lead to unexpected results.
- Do not use environment variables to share secrets
- Keep images small! - you want them only to include what is required for the application to run successfully. Nothing else.

Components:
- Secured connection between components (e.g. client and server)
True or False? It's recommended for production environments that Docker client and server will communicate over network using HTTP socket
False. Communication between client and server shouldn't be done over HTTP since it's insecure. It's better to enforce the daemon to only accept network connections that are secured with TLS.
Basically, the Docker daemon will only accept secured connections with certificates from trusted CA.
What self-healing options are available for Docker containers?
Restart policies. They allow you to automatically restart containers after certain events.
What restart policies are you familiar with?
- always: restart the container when it stops (except when it's stopped explicitly with docker container stop)
- unless-stopped: restart the container unless it was in stopped status
- no: don't restart the container at any point (default policy)
- on-failure: restart the container when it exits due to an error (= exit code different than zero)
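For example, to start a container with a restart policy (the image is arbitrary):

docker run -d --restart unless-stopped nginx:alpine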
Containers - Docker Misc
Explain what is Docker Bench
Kubernetes
Kubernetes Exercises
Developer & "Regular" User Path
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
My First Pod | Pods | Exercise | Solution | |
"Killing" Containers | Pods | Exercise | Solution | |
Creating a Service | Service | Exercise | Solution | |
Creating a ReplicaSet | ReplicaSet | Exercise | Solution | |
Operating ReplicaSets | ReplicaSet | Exercise | Solution | |
ReplicaSets Selectors | ReplicaSet | Exercise | Solution |
Kubernetes Self Assesment
What is Kubernetes? Why are organizations using it?
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
To understand what Kubernetes is good for, let's look at some examples:
- You would like to run a certain application in a container on multiple different locations. Sure, if it's 2-3 servers/locations, you can do it by yourself, but it can be challenging to scale it up to many more locations.
- Performing updates and changes across hundreds of containers
- Handling cases where the current load requires scaling up (or down)
When or why NOT to use Kubernetes?
- If you are a big team of engineers (e.g. 200) deploying applications using containers and you need to manage scaling, rolling out updates, etc., you probably want to use Kubernetes
- If you manage low level infrastructure or bare metal machines, Kubernetes is probably not what you need or want
- If you are a small team (e.g. 20-50 engineers), Kubernetes might be overkill (even if you need scaling, rolling out updates, etc.)
What Kubernetes objects are there?
- Pod
- Service
- ReplicationController
- ReplicaSet
- DaemonSet
- Namespace
- ConfigMap ...
What fields are mandatory with any Kubernetes object?
metadata, kind and apiVersion
What actions or operations do you consider as best practices when it comes to Kubernetes?
- Always make sure Kubernetes YAML files are valid. Applying automated checks and pipelines is recommended.
- Always specify requests and limits to prevent situations where containers use the entire cluster memory, which may lead to OOM issues
What is kubectl?
Kubectl is the Kubernetes command line tool that allows you to run commands against Kubernetes clusters. For example, you can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
What Kubernetes objects do you usually use when deploying applications in Kubernetes?
- Deployment - creates the Pods and watches them
- Service: route traffic to Pods internally
- Ingress: route traffic from outside the cluster
Kubernetes - Cluster
What is a Kubernetes Cluster?
Red Hat Definition: "A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster. At a minimum, a cluster contains a worker node and a master node."
Read more here
What is a Node?
A node is a virtual or a physical machine that serves as a worker for running the applications.
It's recommended to have at least 3 nodes in a production environment.
What is the master node responsible for?
The master coordinates all the workflows in the cluster:
- Scheduling applications
- Managing desired state
- Rolling out new updates
Which command will list the nodes of the cluster?
kubectl get nodes
True or False? Every cluster must have 0 or more master nodes and at least 1 worker
False. A Kubernetes cluster consists of at least 1 master and can have 0 workers (although that wouldn't be very useful...)
What are the components of the master node?
- API Server - the Kubernetes API. All cluster components communicate through it
- Scheduler - assigns an application with a worker node it can run on
- Controller Manager - cluster maintenance (replications, node failures, etc.)
- etcd - stores cluster configuration
What are the components of a worker node?
- Kubelet - an agent responsible for node communication with the master.
- Kube-proxy - load balancing traffic between app components
- Container runtime - the engine that runs the containers (Podman, Docker, ...)
You are managing multiple Kubernetes clusters. How do you quickly change between the clusters using kubectl?
kubectl config use-context
How do you prevent high memory usage in your Kubernetes cluster and possibly issues like memory leak and OOM?
Apply requests and limits, especially on third party applications (where the uncertainty is even bigger)
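A sketch of how requests and limits might look in a container spec (names and values are illustrative):

spec:
  containers:
  - name: some-app
    image: some-image
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "500m"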
Do you have experience with deploying a Kubernetes cluster? If so, can you describe the process in high-level?
- Create multiple instances you will use as Kubernetes nodes/workers. Create also an instance to act as the Master. The instances can be provisioned in a cloud or they can be virtual machines on bare metal hosts.
- Provision a certificate authority that will be used to generate TLS certificates for the different components of a Kubernetes cluster (kubelet, etcd, ...)
- Generate a certificate and private key for the different components
- Generate kubeconfigs so the different clients of Kubernetes can locate the API servers and authenticate.
- Generate encryption key that will be used for encrypting the cluster data
- Create an etcd cluster
Kubernetes - Pods
Explain what is a Pod
A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
Deploy a pod called "my-pod" using the nginx:alpine image
kubectl run my-pod --image=nginx:alpine --restart=Never
If you are a Kubernetes beginner you should know that this is not a common way to run Pods. The common way is to run a Deployment which in turn runs Pod(s).
In addition, Pods and/or Deployments are usually defined in files rather than executed directly using only the CLI arguments.
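For example, a definition file for the same "my-pod" Pod might look like this and be created with kubectl apply -f my-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx:alpine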
What are your thoughts on "Pods are not meant to be created directly"?
Pods are usually indeed not created directly. You'll notice that Pods are usually created as part of other entities such as Deployments or ReplicaSets.
If a Pod dies, Kubernetes will not bring it back. This is why it's more useful for example to define ReplicaSets that will make sure that a given number of Pods will always run, even after a certain Pod dies.
How many containers can a pod contain?
A pod can include multiple containers but in most cases it would probably be one container per pod.
What use cases exist for running multiple containers in a single pod?
A web application with separate (= in their own containers) logging and monitoring components/adapters is one example.
A CI/CD pipeline (using Tekton for example) can run multiple containers in one Pod if a Task contains multiple commands.
What are the possible Pod phases?
- Running - The Pod bound to a node and at least one container is running
- Failed - At least one container in the Pod terminated with a failure
- Succeeded - Every container in the Pod terminated with success
- Unknown - Pod's state could not be obtained
- Pending - Containers are not yet running (Perhaps images are still being downloaded or the pod wasn't scheduled yet)
True or False? By default, pods are isolated. This means they are unable to receive traffic from any source
False. By default, pods are non-isolated = pods accept traffic from any source.
True or False? The "Pending" phase means the Pod was not yet accepted by the Kubernetes cluster so the scheduler can't run it unless it's accepted
False. "Pending" is after the Pod was accepted by the cluster, but the container can't run for different reasons like images not yet downloaded.
How to list the pods in the current namespace?
kubectl get po
How to view all the pods running in all the namespaces?
kubectl get pods --all-namespaces
True or False? A single Pod can be split across multiple nodes
False. A single Pod can run on a single node.
How to delete a pod?
kubectl delete pod pod_name
How to find out on which node a certain pod is running?
kubectl get po -o wide
What are "Static Pods"?
- Managed directly by Kubelet on specific node
- API server is not observing static Pods
- For each static Pod there is a mirror Pod on kubernetes API server but it can't be managed from there
Read more about it here
True or False? A volume defined in Pod can be accessed by all the containers of that Pod
True.
What happens when you run a Pod?
- Kubectl sends a request to the API server to create the Pod
- The Scheduler detects that there is an unassigned Pod (by monitoring the API server)
- The Scheduler chooses a node to assign the Pod to
- The Scheduler updates the API server about which node it chose
- Kubelet (which also monitors the API server) notices there is a Pod assigned to the same node on which it runs and that Pod isn't running
- Kubelet sends request to the container engine (e.g. Docker) to create and run the containers
- An update is sent by Kubelet to the API server (notifying it that the Pod is running)
How to confirm a container is running after running the command kubectl run web --image nginxinc/nginx-unprivileged
- When you run
kubectl describe pods <POD_NAME>
it will tell you whether the container is running: Status: Running
- Run a command inside the container:
kubectl exec web -- ls
After running kubectl run database --image mongo
you see the status is "CrashLoopBackOff". What could possibly have gone wrong and what do you do to confirm?
"CrashLoopBackOff" means the Pod is starting, crashing, starting...and so it repeats itself.
There are many different reasons to get this error - lack of permissions, init-container misconfiguration, persistent volume connection issue, etc.
One of the ways to check why it happened is to run kubectl describe po <POD_NAME>
and have a look at the exit code:
Last State: Terminated
Reason: Error
Exit Code: 100
Another way to check what's going on, is to run kubectl logs <POD_NAME>
. This will provide us with the logs from the containers running in that Pod.
Explain the purpose of the following lines
livenessProbe:
exec:
command:
- cat
- /appStatus
initialDelaySeconds: 10
periodSeconds: 5
These lines make use of liveness probe
. It's used to restart a container when it reaches a non-desired state.
In this case, if the command cat /appStatus
fails, Kubernetes will kill the container and will apply the restart policy. The initialDelaySeconds: 10
means that Kubelet will wait 10 seconds before running the command/probe for the first time. From that point on, it will run it every 5 seconds, as defined with periodSeconds
Explain the purpose of the following lines
readinessProbe:
tcpSocket:
port: 2017
initialDelaySeconds: 15
periodSeconds: 20
They define a readiness probe where the Pod will not be marked as "Ready" before it will be possible to connect to port 2017 of the container. The first check/probe will start after 15 seconds from the moment the container started to run and will continue to run the check/probe every 20 seconds until it will manage to connect to the defined port.
What does the "ErrImagePull" status of a Pod mean?
It wasn't able to pull the image specified for running the container(s). This can happen, for example, if the client didn't authenticate.
More details can be obtained with kubectl describe po <POD_NAME>
.
What happens when you delete a Pod?
- The TERM signal is sent to the main processes inside the containers of the given Pod
- Each container is given a grace period of 30 seconds to shut down the processes gracefully
- If the grace period expires, the KILL signal is used to kill the processes forcefully and the containers as well
Explain liveness probes
A liveness probe is a useful mechanism for restarting a container when a certain user-defined check/probe fails.
For example, the user can define that the command cat /app/status
will run every X seconds and the moment this command fails, the container will be restarted.
You can read more about it in kubernetes.io
Explain readiness probes
Readiness probes are used by Kubelet to know when a container is ready to start accepting traffic.
For example, a readiness probe can try to connect to port 8080 of a container. Once Kubelet manages to connect, the Pod is marked as ready.
You can read more about it in kubernetes.io
How does readiness probe status affect Services when they are combined?
Only Pods whose containers passed the readiness probe (= are in Ready state) will receive requests sent to the Service.
Why is it usually considered better to include one container per Pod?
One reason is that having multiple containers in a Pod makes it harder to scale when you need to scale only one of the containers.
Kubernetes - Deployments
What is a "Deployment" in Kubernetes?
A Kubernetes Deployment is used to tell Kubernetes how to create or modify instances of the pods that hold a containerized application. Deployments can scale the number of replica pods, enable rollout of updated code in a controlled manner, or roll back to an earlier deployment version if necessary.
A Deployment is a declarative statement for the desired state for Pods and Replica Sets.
How to create a deployment?
cat << EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
How to edit a deployment?
kubectl edit deployment some-deployment
What happens after you edit a deployment and change the image?
The pod will terminate and another, new pod, will be created.
Also, when looking at the replicaset, you'll see the old replica doesn't have any pods and a new replicaset is created.
How to delete a deployment?
One way is by specifying the deployment name: kubectl delete deployment [deployment_name]
Another way is using the deployment configuration file: kubectl delete -f deployment.yaml
What happens when you delete a deployment?
The pod related to the deployment will terminate and the replicaset will be removed.
How to make an app accessible on a private or external network?
Using a Service.
An internal load balancer in Kubernetes is called ____
and an external load balancer is called ____
An internal load balancer in Kubernetes is called Service and an external load balancer is Ingress
Kubernetes - Services
What is a Service in Kubernetes?
"An abstract way to expose an application running on a set of Pods as a network service." - read more here
In simpler words, it allows you to add an internal or external connectivity to a certain application running in a container.
True or False? The lifecycle of Pods and Services isn't connected so when a Pod dies, the Service still stays
True
How are a Service and a Deployment connected?
The truth is they aren't connected. Service points to Pod(s) directly, without connecting to the Deployment in any way.
What are important steps in defining/adding a Service?
- Making sure that the targetPort of the Service matches the containerPort of the Pod
- Making sure that selector matches at least one of the Pod's labels
What is the default service type in Kubernetes and what is it used for?
The default is ClusterIP and it's used for exposing a port internally. It's useful when you want to enable internal communication between Pods and prevent any external access.
How to get information on a certain service?
kubectl describe service <SERVICE_NAME>
It's more common to use kubectl describe svc ...
What does the following command do?
kubectl expose rs some-replicaset --name=replicaset-svc --target-port=2017 --type=NodePort
It exposes a ReplicaSet by creating a service called 'replicaset-svc'. The exposed port is 2017 (this is the port used by the application) and the service type is NodePort which means it will be reachable externally.
True or False? The target port, in the case of running the following command, will be exposed only on one of the Kubernetes cluster nodes but it will be routed to all the pods
kubectl expose rs some-replicaset --name=replicaset-svc --target-port=2017 --type=NodePort
False. It will be exposed on every node of the cluster and will be routed to one of the Pods (which belong to the ReplicaSet)
How to verify that a certain service is configured to forward the requests to a given pod?
Run kubectl describe service
and see if the IPs from "Endpoints" match any IPs from the output of kubectl get pod -o wide
Explain what will happen when running apply on the following block
apiVersion: v1
kind: Service
metadata:
name: some-app
spec:
type: NodePort
ports:
- port: 8080
nodePort: 2017
protocol: TCP
selector:
type: backend
service: some-app
It creates a new Service of the type "NodePort" which means it can be used for internal and external communication with the app.
The port of the application is 8080 and the requests will be forwarded to this port. The exposed port is 2017. As a note, it's not common practice to specify the nodePort.
The protocol used is TCP (instead of UDP). This is also the default, so you don't have to specify it.
The selector is used by the Service to know to which Pods to forward the requests. In this case, Pods with the labels "type: backend" and "service: some-app".
How to turn the following service into an external one?
spec:
selector:
app: some-app
ports:
- protocol: TCP
port: 8081
targetPort: 8081
Adding type: LoadBalancer
and nodePort
spec:
selector:
app: some-app
type: LoadBalancer
ports:
- protocol: TCP
port: 8081
targetPort: 8081
nodePort: 32412
What would you use to route traffic from outside the Kubernetes cluster to services within a cluster?
Ingress
True or False? When "NodePort" is used, "ClusterIP" will be created automatically?
True
When would you use the "LoadBalancer" type
Mostly when you would like to combine it with cloud provider's load balancer
How would you map a service to an external address?
Using the 'ExternalName' directive.
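A sketch of an ExternalName Service (the name and domain are made up):

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com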
Describe in detail what happens when you create a service
- Kubectl sends a request to the API server to create a Service
- The controller detects there is a new Service
- The controller creates Endpoint objects with the same name as the Service
- The controller uses the Service selector to identify the endpoints
- kube-proxy detects there is a new endpoint object + new service and adds iptables rules to capture traffic to the Service port and redirect it to endpoints
- kube-dns detects there is a new Service and adds a DNS record for the Service to the DNS server
How to list the endpoints of a certain app?
kubectl get ep <name>
How can you find out information on a Service related to a certain Pod if all you can use is kubectl exec --
You can run kubectl exec <POD_NAME> -- env
which will give you a couple of environment variables related to the Service.
Variables such as [SERVICE_NAME]_SERVICE_HOST
, [SERVICE_NAME]_SERVICE_PORT
, ...
Describe what happens when a container tries to connect with its corresponding Service for the first time. Explain who added each of the components you include in your description
- The container looks at the nameserver defined in /etc/resolv.conf
- The container queries the nameserver so the address is resolved to the Service IP
- Requests sent to the Service IP are forwarded with iptables rules (or other chosen software) to the endpoint(s).
Explanation as to who added them:
- The nameserver in the container is added by kubelet during the scheduling of the Pod, by using kube-dns
- The DNS record of the service is added by kube-dns during the Service creation
- iptables rules are added by kube-proxy during Endpoint and Service creation
Kubernetes - Ingress
What is Ingress?
From Kubernetes docs: "Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource."
Read more here
Complete the following configuration file to make it Ingress
metadata:
name: someapp-ingress
spec:
There are several ways to answer this question.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someapp-ingress
spec:
  rules:
  - host: my.host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: someapp-internal-service
            port:
              number: 8080
Explain the meaning of "http", "host" and "backend" directives
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someapp-ingress
spec:
  rules:
  - host: my.host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: someapp-internal-service
            port:
              number: 8080
host is the entry point of the cluster so basically a valid domain address that maps to cluster's node IP address
the http line is used for specifying that incoming requests will be forwarded to the internal service using HTTP.
backend is referencing the internal Service (service.name is the name of the Service and service.port.number is the port defined in the Service's ports section).
Why may using a wildcard in an Ingress host lead to issues?
The reason you should not use a wildcard value for a host (like - host: *
) is because you basically tell your Kubernetes cluster to forward all the traffic to the container where you used this ingress. This may cause the entire cluster to go down.
What is Ingress Controller?
An implementation for Ingress. It's basically another pod (or a set of pods) that evaluates and processes Ingress rules, and thus manages all the redirections.
There are multiple Ingress Controller implementations (the one from Kubernetes is Kubernetes Nginx Ingress Controller).
What are some use cases for using Ingress?
- Multiple sub-domains (multiple host entries, each with its own service)
- One domain with multiple services (multiple paths where each one is mapped to a different service/application)
How to list Ingress in your namespace?
kubectl get ingress
What is Ingress Default Backend?
It specifies what to do with an incoming request to the Kubernetes cluster that isn't mapped to any backend (= there is no rule for mapping the request to a service). If the default backend service isn't defined, it's recommended to define one so users still see some kind of message instead of nothing or an unclear error.
How to configure a default backend?
Create a Service resource that specifies the name of the default backend as reflected in kubectl describe ingress ...
and the port under the ports section.
How to configure TLS with Ingress?
Add tls and secretName entries.
spec:
tls:
- hosts:
- some_app.com
secretName: someapp-secret-tls
True or False? When configuring Ingress with TLS, the Secret component must be in the same namespace as the Ingress component
True
Which Kubernetes concept would you use to control traffic flow at the IP address or port level?
Network Policies
What the following block of lines does?
spec:
replicas: 2
selector:
matchLabels:
type: backend
template:
metadata:
labels:
type: backend
spec:
containers:
- name: httpd-yup
image: httpd
It defines a ReplicaSet for Pods whose "type" label is set to "backend", so at any given point in time there will be 2 such Pods running.
Kubernetes - ReplicaSets
What is the purpose of ReplicaSet?
kubernetes.io: "A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods."
In simpler words, a ReplicaSet will ensure the specified number of Pod replicas is running for a selected Pod. If there are more Pods than defined in the ReplicaSet, some will be removed. If there are fewer than what is defined in the ReplicaSet, then more replicas will be added.
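A minimal ReplicaSet sketch (names, labels and image are illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:alpine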
What will happen when a Pod, created by ReplicaSet, is deleted directly with kubectl delete po ...
?
The ReplicaSet will create a new Pod in order to reach the desired number of replicas.
True or False? If a ReplicaSet defines 2 replicas but there are 3 Pods running matching the ReplicaSet selector, it will do nothing
False. It will terminate one of the Pods to reach the desired state of 2 replicas.
Describe the sequence of events in case of creating a ReplicaSet
- The client (e.g. kubectl) sends a request to the API server to create a ReplicaSet
- The Controller detects there is a new event requesting for a ReplicaSet
- The controller creates new Pod definitions (the exact number depends on what is defined in the ReplicaSet definition)
- The scheduler detects unassigned Pods and decides to which nodes to assign the Pods. This information is sent to the API server
- Kubelet detects that two Pods were assigned to the node it's running on (as it's constantly watching the API server)
- Kubelet sends requests to the container engine, to create the containers that are part of the Pod
- Kubelet sends a request to the API server to notify it the Pods were created
How to list ReplicaSets in the current namespace?
kubectl get rs
Is it possible to delete ReplicaSet without deleting the Pods it created?
Yes, with --cascade=false
.
kubectl delete -f rs.yaml --cascade=false
What is the default number of replicas if not explicitly specified?
1
What does the following output of kubectl get rs mean?
NAME DESIRED CURRENT READY AGE
web 2 2 0 2m23s
The replicaset web
has 2 replicas. It seems that the containers inside the Pod(s) are not yet running, since the value of READY is 0. It might be normal, since it takes time for some containers to start running, or it might be due to an error. Running kubectl describe po POD_NAME
or kubectl logs POD_NAME
can give us more information.
You run kubectl get rs
and while DESIRED is set to 2, you see that READY is set to 0. What are some possible reasons for it to be 0?
- Images are still being pulled
- There is an error and the containers can't reach the state "Running"
True or False? Pods specified by the selector field of ReplicaSet must be created by the ReplicaSet itself
False. The Pods can already be running and they can initially be created by any object. It doesn't matter to the ReplicaSet, and it's not a requirement for it to acquire and monitor them.
True or False? In case of a ReplicaSet, if the Pods specified in the selector field don't exist, the ReplicaSet will wait for them to run before doing anything
False. It will take care of running the missing Pods.
In case of a ReplicaSet, Which field is mandatory in the spec section?
The field template
in the spec section is mandatory. It's used by the ReplicaSet to create new Pods when needed.
You've created a ReplicaSet, how to check whether the ReplicaSet found matching Pods or it created new Pods?
kubectl describe rs <ReplicaSet Name>
It will be visible under Events
(the very last lines)
True or False? Deleting a ReplicaSet will delete the Pods it created
True (and not only the Pods but anything else it created).
True or False? Removing the label from a Pod that is used by ReplicaSet to match Pods, will cause the ReplicaSet to create a new Pod
True. When the label used by a ReplicaSet in the selector field is removed from a Pod, that Pod is no longer controlled by the ReplicaSet, and the ReplicaSet will create a new Pod to compensate for the one it "lost".
How to scale a deployment to 8 replicas?
kubectl scale deploy <DEPLOYMENT_NAME> --replicas=8
True or False? ReplicaSets are running the moment the user executed the command to create them (like kubectl create -f rs.yaml)
False. It can take some time, depends on what exactly you are running. To see if they are up and running, run kubectl get rs
and watch the 'READY' column.
How to expose a ReplicaSet as a new service?
kubectl expose rs <ReplicaSet Name> --name=<Service Name> --target-port=<Port to expose> --type=NodePort
Few notes:
- the target port depends on which port the app is using in the container
- the type can be different and doesn't have to be specifically "NodePort"
Kubernetes - Storage
What is a volume in regards to Kubernetes?
A directory accessible by the containers inside a certain Pod. The mechanism responsible for creating the directory, managing it, etc. mainly depends on the volume type.
Which problems do volumes in Kubernetes solve?
- Sharing files between containers running in the same Pod
- Storage in containers is ephemeral - it usually doesn't last for long. For example, when a container crashes, you lose all on-disk data.
Explain ephemeral volume types vs. persistent volumes in regards to Pods
Ephemeral volume types have the lifetime of a pod as opposed to persistent volumes which exist beyond the lifetime of a Pod.
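A common ephemeral volume example is emptyDir, which lives and dies with the Pod (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  containers:
  - name: app
    image: nginx:alpine
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}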
Kubernetes - Network Policies
Explain Network Policies
kubernetes.io: "NetworkPolicies are an application-centric construct which allow you to specify how a pod is allowed to communicate with various network "entities"..."
In simpler words, Network Policies specify how pods are allowed/disallowed to communicate with each other and/or other network endpoints.
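A sketch of a NetworkPolicy that only allows ingress to "backend" Pods from "frontend" Pods (labels and names are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend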
What are some use cases for using Network Policies?
- Security: you want to prevent everyone from communicating with a certain pod for security reasons
- Controlling network traffic: you would like to deny network flow between two specific nodes
True or False? If no network policies are applied to a pod, then no connections to or from it are allowed
False. By default pods are non-isolated.
In case of two pods, if there is an egress policy on the source denying traffic and an ingress policy on the destination that allows traffic, will traffic be allowed or denied?
Denied. Both the source and destination policies have to allow traffic for it to be allowed.
Kubernetes - Configuration File
Which parts does a configuration file have?
It has three main parts:
- Metadata
- Specification
- Status (this is automatically generated and added by Kubernetes)
What is the format of a configuration file?
YAML
How to get latest configuration of a deployment?
kubectl get deployment [deployment_name] -o yaml
Where does the Kubernetes cluster store the cluster state?
etcd
Kubernetes - etcd
What is etcd?
True or False? Etcd holds the current status of any kubernetes component
True
True or False? The API server is the only component which communicates directly with etcd
True
True or False? application data is not stored in etcd
True
Why etcd? Why not some SQL or NoSQL database?
Kubernetes - Namespaces
What are namespaces?
Namespaces allow you to split your cluster into virtual clusters where you can group your applications in a way that makes sense and is completely separated from the other groups (so you can, for example, create an app with the same name in two different namespaces)
Why to use namespaces? What is the problem with using one default namespace?
When using the default namespace alone, it becomes hard over time to get an overview of all the applications you manage in your cluster. Namespaces make it easier to organize the applications into groups that make sense, like a namespace for all the monitoring applications and a namespace for all the security applications, etc.
Namespaces can also be useful for managing Blue/Green environments where each namespace can include a different version of an app and also share resources that are in other namespaces (namespaces like logging, monitoring, etc.).
Another use case for namespaces is one cluster used by multiple teams. When multiple teams use the same cluster, they might end up stepping on each other's toes. For example, if they end up creating an app with the same name, one of the teams overrides the app of the other team, because there can't be two apps in Kubernetes with the same name (in the same namespace).
True or False? When a namespace is deleted all resources in that namespace are not deleted but moved to another default namespace
False. When a namespace is deleted, the resources in that namespace are deleted as well.
What special namespaces are there by default when creating a Kubernetes cluster?
- default
- kube-system
- kube-public
- kube-node-lease
What can you find in kube-system namespace?
- Master and Kubectl processes
- System processes
How to list all namespaces?
kubectl get namespaces
What kube-public contains?
- A configmap, which contains cluster information
- Publicly accessible data
How to get the name of the current namespace?
kubectl config view | grep namespace
What kube-node-lease contains?
It holds information on heartbeats of nodes. Each node gets an object which holds information about its availability.
How to create a namespace?
One way is by running kubectl create namespace [NAMESPACE_NAME]
Another way is by using a namespace configuration file:
apiVersion: v1
kind: Namespace
metadata:
  name: some-namespace
What does the default namespace contain?
Any resource you create while using Kubernetes.
True or False? With namespaces you can limit the resources consumed by the users/teams
True. With namespaces you can limit CPU, RAM and storage usage.
How to switch to another namespace? In other words how to change active namespace?
kubectl config set-context --current --namespace=some-namespace
and validate with kubectl config view --minify | grep namespace:
OR
kubens some-namespace
What is Resource Quota?
How to create a Resource Quota?
kubectl create quota some-quota --hard=cpu=2,pods=2
Which resources are accessible from different namespaces?
Service.
Let's say you have three namespaces: x, y and z. In x namespace you have a ConfigMap referencing service in z namespace. Can you reference the ConfigMap in x namespace from y namespace?
No. A ConfigMap can only be referenced from within its own namespace, so you would have to create a separate ConfigMap in the y namespace.
Which service and in which namespace the following file is referencing?
apiVersion: v1
kind: ConfigMap
metadata:
name: some-configmap
data:
some_url: samurai.jack
It's referencing the service "samurai" in the namespace called "jack".
Which components can't be created within a namespace?
Volume and Node.
How to list all the components that bound to a namespace?
kubectl api-resources --namespaced=true
How to create components in a namespace?
One way is by specifying --namespace like this: kubectl apply -f my_component.yaml --namespace=some-namespace
Another way is by specifying it in the YAML itself:
apiVersion: v1
kind: ConfigMap
metadata:
name: some-configmap
namespace: some-namespace
and you can verify with: kubectl get configmap -n some-namespace
How to execute the command "ls" in an existing pod?
kubectl exec some-pod -it -- ls
How to create a service that exposes a deployment?
kubectl expose deploy some-deployment --port=80 --target-port=8080
How to create a pod and a service with one command?
kubectl run nginx --image=nginx --restart=Never --port 80 --expose
Describe in detail what the following command does kubectl create deployment kubernetes-httpd --image=httpd
Why create a Deployment if Pods can be launched with a ReplicaSet?
How to get list of resources which are not in a namespace?
kubectl api-resources --namespaced=false
How to delete all pods whose status is not "Running"?
kubectl delete pods --field-selector=status.phase!='Running'
What does the kubectl logs [pod-name] command do?
What does the kubectl describe pod [pod-name] command do?
How to display the resources usages of pods?
kubectl top pod
What does kubectl get componentstatus do?
Outputs the status of each of the control plane components.
What is Minikube?
Minikube is a lightweight Kubernetes implementation. It creates a local virtual machine and deploys a simple (single node) cluster.
How do you monitor your Kubernetes?
You suspect one of the pods is having issues, what do you do?
Start by inspecting the pod's status. We can use the command kubectl get pods
(add --all-namespaces to also see pods in system namespaces)
If we see "Error" status, we can keep debugging by running the command kubectl describe pod [name]
. In case we still don't see anything useful we can try stern for log tailing.
In case we find out there was a temporary issue with the pod or the system, we can try restarting the pod with the following kubectl scale deployment [name] --replicas=0
Setting the replicas to 0 will shut down the process. Now start it with kubectl scale deployment [name] --replicas=1
What does the Kubernetes Scheduler do?
What happens to running pods if you stop Kubelet on the worker nodes?
What happens when pods use too much memory (more than their limit)?
They become candidates for termination.
Describe how roll-back works
True or False? Memory is a compressible resource, meaning that when a container reaches the memory limit, it will keep running
False. CPU is a compressible resource while memory is a non-compressible resource - once a container reaches the memory limit, it will be terminated.
Kubernetes - Operators
What is an Operator?
Explained here
"Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop."
Why do we need Operators?
The process of managing stateful applications in Kubernetes isn't as straightforward as managing stateless applications, where reaching the desired status and upgrades are both handled the same way for every replica. In stateful applications, upgrading each replica might require different handling due to the stateful nature of the app; each replica might be in a different state. As a result, we often need a human operator to manage stateful applications. A Kubernetes Operator is supposed to assist with this.
This also helps with automating a standard process on multiple Kubernetes clusters.
What components does the Operator consist of?
- CRD (custom resource definition)
- Controller - Custom control loop which runs against the CRD
How does an Operator work?
It uses the control loop used by Kubernetes in general. It watches for changes in the application state. The difference is that it uses a custom control loop.
In addition, it also makes use of CRDs (Custom Resource Definitions), so basically it extends the Kubernetes API.
True or False? Kubernetes Operators are used for stateful applications
True
What is the Operator Framework?
An open source toolkit used to manage Kubernetes native applications, called Operators, in an automated and efficient way.
What components does the Operator Framework consist of?
- Operator SDK - allows developers to build operators
- Operator Lifecycle Manager - helps to install, update and generally manage the lifecycle of all operators
- Operator Metering - Enables usage reporting for operators that provide specialized services
Describe in detail what is the Operator Lifecycle Manager
It's part of the Operator Framework, used for managing the lifecycle of operators. It basically extends Kubernetes so a user can use a declarative way to manage operators (installation, upgrade, ...).
What does the openshift-operator-lifecycle-manager namespace include?
It includes:
- catalog-operator - Resolves and installs ClusterServiceVersions and the resources they specify.
- olm-operator - Deploys applications defined by ClusterServiceVersion resource
What is kubeconfig? What do you use it for?
Can you use a Deployment for stateful applications?
Explain StatefulSet
Kubernetes - Secrets
Explain Kubernetes Secrets
Secrets let you store and manage sensitive information (passwords, ssh keys, etc.)
How to create a Secret from a key and value?
kubectl create secret generic some-secret --from-literal=password='donttellmypassword'
How to create a Secret from a file?
kubectl create secret generic some-secret --from-file=/some/file.txt
What does type: Opaque in a Secret file mean? What other types are there?
Opaque is the default type used for key-value pairs.
True or False? storing data in a Secret component makes it automatically secured
False. Some known security mechanisms like "encryption" aren't enabled by default.
What is the problem with the following Secret file:
apiVersion: v1
kind: Secret
metadata:
name: some-secret
type: Opaque
data:
password: mySecretPassword
The password isn't base64-encoded. The data field expects base64-encoded values (and note that base64 is encoding, not encryption), so you should run something like `echo -n 'mySecretPassword' | base64` and paste the result into the file instead of using plain text.
How to create a Secret from a configuration file?
kubectl apply -f some-secret.yaml
What does the following section in a Deployment configuration file mean?
spec:
  containers:
  - env:
    - name: USER_PASSWORD
      valueFrom:
        secretKeyRef:
          name: some-secret
          key: password
USER_PASSWORD environment variable will store the value from password key in the secret called "some-secret" In other words, you reference a value from a Kubernetes Secret.
Kubernetes - Volumes
True or False? Kubernetes provides data persistence out of the box, so when you restart a pod, data is saved
False
Explain "Persistent Volumes". Why do we need it?
Persistent Volumes allow us to save data so basically they provide storage that doesn't depend on the pod lifecycle.
True or False? Persistent Volume must be available to all nodes because the pod can restart on any of them
True
What types of persistent volumes are there?
- NFS
- iSCSI
- CephFS
- ...
What is PersistentVolumeClaim?
Explain Volume Snapshots
True or False? Kubernetes manages data persistence
False
Explain Storage Classes
Explain "Dynamic Provisioning" and "Static Provisioning"
Explain Access Modes
What is CSI Volume Cloning?
Explain "Ephemeral Volumes"
What types of ephemeral volumes does Kubernetes support?
What is Reclaim Policy?
What reclaim policies are there?
- Retain
- Recycle
- Delete
Kubernetes - Access Control
What is RBAC?
Explain the Role and RoleBinding objects
What is the difference between Role and ClusterRole objects?
Explain what are "Service Accounts" and in which scenario would use create/use one
Kubernetes.io: "A service account provides an identity for processes that run in a Pod."
An example of when to use one: you define a pipeline that needs to build and push an image. In order to have sufficient permissions to build and push an image, that pipeline would require a service account with sufficient permissions.
What happens when you create a pod and you DON'T specify a service account?
The pod is automatically assigned the default service account (in the namespace where the pod is running).
Explain how Service Accounts are different from User Accounts
- User accounts are global while Service accounts are unique per namespace
- User accounts are meant for humans or client processes while Service accounts are for processes which run in pods
How to list Service Accounts?
kubectl get serviceaccounts
Explain "Security Context"
kubernetes.io: "A security context defines privilege and access control settings for a Pod or Container."
Kubernetes - Patterns
Explain the sidecar container pattern
Kubernetes - CronJob
Explain what is CronJob and what is it used for
What possible issue can arise from using the following spec and how to fix it?
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: some-cron-job
spec:
schedule: '*/1 * * * *'
startingDeadlineSeconds: 10
concurrencyPolicy: Allow
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: some-cron-job
spec:
schedule: '*/1 * * * *'
startingDeadlineSeconds: 10
concurrencyPolicy: Allow
If the cron job fails, the next job will not replace the previous one, because the "concurrencyPolicy" value is "Allow". It will keep spawning new jobs, so eventually the system will be filled with failed cron jobs. To avoid such a problem, the "concurrencyPolicy" value should be either "Replace" or "Forbid".
What issue might arise from using the following CronJob and how to fix it?
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: "some-cron-job"
spec:
schedule: '*/1 * * * *'
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
The following lines are placed under the pod template:
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
As a result, this configuration isn't part of the cron job spec, hence the cron job has no limits, which can cause issues like OOM and potentially lead to the API server being down.
To fix it, these lines should be placed in the spec of the cron job, above or under the "schedule" directive in the above example.
Kubernetes - Misc
Explain Imperative Management vs. Declarative Management
Explain what Kubernetes Service Discovery means
You have one Kubernetes cluster and multiple teams that would like to use it. You would like to limit the resources each team consumes in the cluster. Which Kubernetes concept would you use for that?
Namespaces. They allow you to limit resources and also make sure there are no collisions between teams when working in the cluster (like creating an app with the same name).
What does Kube Proxy do?
What are "Resource Quotas" used for and how?
Explain ConfigMap
It separates configuration from pods. It's good for cases where you might need to change configuration at some point, but you don't want to restart the application or rebuild the image, so you create a ConfigMap and connect it to a pod, externally to the pod.
Overall it's good for:
- Sharing the same configuration between different pods
- Storing configuration externally to the pod
How to use ConfigMaps?
- Create it (from key&value, a file or an env file)
- Attach it. Mount a configmap as a volume
True or False? Sensitive data, like credentials, should be stored in a ConfigMap
False. Use a Secret instead.
Explain "Horizontal Pod Autoscaler"
It scales the number of pods automatically, based on observed CPU utilization.
When you delete a pod, is it deleted instantly? (a moment after running the command)
What does being cloud-native mean?
Explain the pet and cattle approach of infrastructure with respect to kubernetes
Describe how one proceeds to run a containerized web app in K8s, which should be reachable from a public URL.
How would you troubleshoot your cluster if some applications are not reachable any more?
Describe what CustomResourceDefinitions are in the Kubernetes world. What can they be used for?
How does scheduling work in kubernetes?
The control plane component kube-scheduler asks the following questions,
- What to schedule? It tries to understand the pod-definition specifications
- Which node to schedule? It tries to determine the best node with available resources to spin a pod
- Binds the Pod to a given node
View more here
How are labels and selectors used?
What QoS classes are there?
- Guaranteed
- Burstable
- BestEffort
Explain Labels. What are they and why would one use them?
Explain Selectors
What is Kubeconfig?
Kubernetes - Gatekeeper
What is Gatekeeper?
Gatekeeper docs: "Gatekeeper is a validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent"
Explain how Gatekeeper works
On every request sent to the Kubernetes cluster, Gatekeeper sends the policies and the resources to OPA (Open Policy Agent) to check whether the request violates any policy. If it does, Gatekeeper returns the policy error message. If it doesn't violate any policy, the request reaches the cluster.
Kubernetes - Policy Testing
What is Conftest?
Conftest allows you to write tests against structured files. You can think of it as a test library for Kubernetes resources.
It is mostly used in testing environments such as CI pipelines or local hooks.
What is Datree? How is it different from Conftest?
Same as Conftest, it is used for policy testing and enforcement. The difference is that it comes with built-in policies.
Kubernetes - Helm
What is Helm?
Package manager for Kubernetes. Basically the ability to package YAML files and distribute them to other users and apply them in different clusters.
Why do we need Helm? What would be the use case for using it?
Sometimes when you would like to deploy a certain application to your cluster, you need to create multiple YAML files/components like Secret, Service, ConfigMap, etc. This can be a tedious task, so it makes sense to ease the process by introducing something that allows us to share this bundle of YAMLs every time we would like to add an application to our cluster. This something is called Helm.
A common scenario is having multiple Kubernetes clusters (prod, dev, staging). Instead of individually applying different YAMLs in each cluster, it makes more sense to create one Chart and install it in every cluster.
Explain "Helm Charts"
A Helm Chart is a bundle of YAML files, a bundle that you can consume from repositories or create yourself and publish to repositories.
It is said that Helm is also Templating Engine. What does it mean?
It is useful for scenarios where you have multiple applications that are all similar, so there are only minor differences in their configuration files and most values are the same. With Helm you can define a common blueprint for all of them, and the values that are not fixed become placeholders. This is called a template file and it looks similar to the following:
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.name }}
spec:
containers:
- name: {{ .Values.container.name }}
image: {{ .Values.container.image }}
port: {{ .Values.container.port }}
The values themselves will be in a separate file:
name: some-app
container:
name: some-app-container
image: some-app-image
port: 1991
What are some use cases for using Helm template file?
- Deploy the same application across multiple different environments
- CI/CD
Explain the Helm Chart Directory Structure
someChart/ -> the name of the chart
  Chart.yaml -> meta information on the chart
  values.yaml -> values for template files
  charts/ -> chart dependencies
  templates/ -> template files :)
How do you search for charts?
helm search hub [some_keyword]
Is it possible to override values in values.yaml file when installing a chart?
Yes. You can pass another values file: `helm install --values=override-values.yaml [CHART_NAME]`
Or directly on the command line: helm install --set some_key=some_value
How does Helm support release management?
Helm allows you to upgrade, remove and roll back to previous versions of charts. In version 2 of Helm this was done with what is known as "Tiller". In version 3, Tiller was removed due to security concerns.
Kubernetes - Security
What security best practices do you follow in regards to the Kubernetes cluster?
- Secure inter-service communication (one way is to use Istio to provide mutual TLS)
- Isolate different resources into separate namespaces based on some logical groups
- Use a supported container runtime (if you use Docker then drop it because it's deprecated; you might want to use CRI-O as an engine and Podman as the CLI)
- Test properly changes to the cluster (e.g. consider using Datree to prevent kubernetes misconfigurations)
- Limit who can do what (by using for example OPA gatekeeper) in the cluster
- Use NetworkPolicy to apply network security
- Consider using tools (e.g. Falco) for monitoring threats
Kubernetes - Troubleshooting Scenarios
Running kubectl get pods
you see Pods in "Pending" status. What would you do?
One possible path is to run kubectl describe pod <pod name>
to get more details.
You might see one of the following:
- Cluster is full. In this case, extend the cluster.
- ResourcesQuota limits are met. In this case you might want to modify them
- Check if PersistentVolumeClaim mount is pending
If none of the above helped, run the command (get pods) with -o wide to see if the Pod is assigned to a node. If not, there might be an issue with the scheduler.
Users are unable to reach an application running on a Pod on Kubernetes. What might be the issue and how would you check?
One possible path is to start with checking the Pod status.
- Is the Pod pending? if yes, check for the reason with
kubectl describe pod <pod name>
TODO: finish this...
Kubernetes - Submariner
Explain what is Submariner and what is it used for
"Submariner enables direct networking between pods and services in different Kubernetes clusters, either on premise or in the cloud."
You can learn more here
What does each of the following components do?
- Lighthouse
- Broker
- Gateway Engine
- Route Agent
Kubernetes - Istio
What is Istio? What is it used for?
Programming
What programming language do you prefer to use for DevOps related tasks? Why specifically this one?
What are statically typed (or simply, typed) languages?
In statically typed languages the variable type is known at compile time instead of at run time. Examples of such languages: C, C++ and Java
Explain expressions and statements
An expression is anything that results in a value (even if the value is None). Basically, any combination of literals, names and operators that evaluates to a value, so a string, an integer, a list, ... are all expressions.
Statements are instructions executed by the interpreter, like variable assignments, for loops and conditionals (if-else).
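A short illustration of the difference (the names used are arbitrary):

# Expressions - each of these evaluates to a value:
3 + 4                        # 7
len("hello")                 # 5
[x * 2 for x in range(3)]    # [0, 2, 4]

# Statements - instructions that perform an action:
total = 3 + 4                # assignment statement
if total > 5:                # conditional statement
    print("big")
for i in range(2):           # loop statement
    pass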
What is Object Oriented Programming? Why is it important?
Explain Composition
What is a compiler?
What is an interpreter?
Are you familiar with SOLID design principles?
SOLID design principles are about:
- Making it easier to extend the functionality of the system
- Making the code more readable and easier to maintain
SOLID is:
- Single Responsibility - A class should only have a single responsibility
- Open-Closed - An entity should be open for extension, but closed for modification. What this practically means is that you should extend functionality by adding a new code and not by modifying it. Your system should be separated into components so it can be easily extended without breaking everything.
- Liskov Substitution - Any derived class should be able to substitute its parent without altering its correctness. Practically, every part of the code will get the expected result no matter which class is being used
- Interface Segregation - A client should never depend on anything it doesn't use
- Dependency Inversion - High level modules should depend on abstractions, not on low level modules (see the minimal sketch after this list)
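A minimal Python sketch (the class names are hypothetical, just for illustration) of two of these principles - Single Responsibility and Dependency Inversion:

from abc import ABC, abstractmethod

# Single Responsibility: this class only knows how to generate a report
class ReportGenerator:
    def generate(self, data):
        return "Report: {}".format(data)

# Dependency Inversion: the high-level service depends on this abstraction...
class Storage(ABC):
    @abstractmethod
    def save(self, content):
        ...

# ...while concrete storage backends are low-level details that can be swapped
class ConsoleStorage(Storage):
    def save(self, content):
        print(content)

class ReportService:
    def __init__(self, storage):
        self.storage = storage  # receives the abstraction, not a concrete class

    def publish(self, data):
        self.storage.save(ReportGenerator().generate(data))

ReportService(ConsoleStorage()).publish({"users": 3})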
What is YAGNI? What is your opinion on it?
What is DRY? What is your opinion on it?
What are the four pillars of object oriented programming?
Explain recursion
Explain Inversion of Control
Explain Dependency Injection
True or False? In Dynamically typed languages the variable type is known at run-time instead of at compile-time
True
Explain what are design patterns and describe three of them in detail
Explain big O notation
What is "Duck Typing"?
Explain string interpolation
Common algorithms
Binary search:
- How does it works?
- Can you implement it? (in any language you prefer)
- What is the average performance of the algorithm you wrote?
It's a search algorithm used with sorted arrays/lists to find a target value, by dividing the array each iteration and comparing the middle value to the target value. If the middle value is smaller than the target value, the target is searched for in the right part of the divided array, otherwise in the left part. This continues until the value is found (or the array can't be divided any further).
The average performance of the above algorithm is O(log n). Best case is O(1) and worst case is O(log n).
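Since the question invites an implementation, here is a minimal iterative sketch in Python (the names are illustrative):

def binary_search(sorted_li, target):
    """Return the index of target in sorted_li, or -1 if it's not there."""
    low, high = 0, len(sorted_li) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_li[mid] == target:
            return mid
        elif sorted_li[mid] < target:
            low = mid + 1   # target can only be in the right half
        else:
            high = mid - 1  # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1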
Code Review
What are your code-review best practices?
Do you agree/disagree with each of the following statements and why?:
- The commit message is not important. When reviewing a change/patch one should focus on the actual change
- You shouldn't test your code before submitting it. This is what CI/CD exists for.
Strings
In any language you want, write a function to determine if a given string is a palindrome
In any language you want, write a function to determine if two strings are Anagrams
Integers
In any language you would like, print the numbers from 1 to a given integer. For example for input: 5, the output is: 12345
Time Complexity
Describe what would be the time complexity of the operations access
, search
insert
and remove
for the following data structures:
- Stack
- Queue
- Linked List
- Binary Search Tree
What is the complexity for the best, worst and average cases of each of the following algorithms?:
- Quick sort
- Merge sort
- Bucket Sort
- Radix Sort
Data Structures & Types
Implement Stack in any language you would like
Tell me everything you know about Linked Lists
- A linked list is a data structure
- It consists of a collection of nodes. Together these nodes represent a sequence
- Useful for use cases where you need to insert or remove an element from any position of the linked list
- Some programming languages don't have linked lists as a built-in data type (like Python for example) but it can be easily implemented
Describe (no need to implement) how to detect a loop in a Linked List
There are multiple ways to detect a loop in a linked list. I'll mention three here:
Worst solution:
Two pointers, where one points to the head and one points to the last node. Each time you advance the last pointer by one and check whether the distance between the head pointer and the moved pointer is bigger than the last time you measured the same distance (if not, you have a loop).
The reason it's probably the worst solution is that the time complexity here is O(n^2)
Decent solution:
Create a hash table and start traversing the linked list. Every time you move, check whether the node you moved to is in the hash table. If it isn't, insert it into the hash table. If at any point you do find the node in the hash table, it means you have a loop. When you reach None/Null, it's the end and you can return a "no loop" value. This one is very easy to implement (just create a hash table, update it and check whether the node is in the hash table every time you move to the next node), but since the auxiliary space is O(n) (you create a hash table), it's not the best solution
Good solution:
Instead of creating a hash table to document which nodes in the linked list you have visited, as in the previous solution, you can modify the Linked List (or the Node to be precise) to have a "visited" attribute. Every time you visit a node, you set "visited" to True.
Time complexity is O(n) and auxiliary space is O(1), so it's a good solution, but the only problem is that you have to modify the Linked List.
Best solution:
You set two pointers to traverse the linked list from the beginning. You move one pointer by one each time and the other pointer by two. If at any point they meet, you have a loop. This solution is also called "Floyd's Cycle-Finding"
Time complexity is O(n) and auxiliary space is O(1). Perfect :)
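A minimal sketch of the hash table (set) approach described above, assuming a simple Node class with a next attribute (Floyd's algorithm itself is implemented later, in the Python - Linked List section):

def has_loop(head):
    seen = set()
    node = head
    while node is not None:
        if node in seen:   # we already visited this node -> there is a loop
            return True
        seen.add(node)
        node = node.next
    return False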
Implement Hash table in any language you would like
What is Integer Overflow? How is it handled?
Name 3 design patterns. Do you know how to implement (= provide an example) these design pattern in any language you'll choose?
Given an array/list of integers, find 3 integers which are adding up to 0 (in any language you would like)
def find_triplets_sum_to_zero(li):
li = sorted(li)
for i, val in enumerate(li):
low, up = 0, len(li)-1
while low < i < up:
tmp = val + li[low] + li[up]
if tmp > 0:
up -= 1
elif tmp < 0:
low += 1
else:
yield li[low], val, li[up]
low += 1
up -= 1
Python
Python Exercises
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Identify the data type | Data Types | Exercise | Solution | |
Identify the data type - Advanced | Data Types | Exercise | Solution | |
Reverse String | Strings | Exercise | Solution | |
Compress String | Strings | Exercise | Solution |
Python Self Assessment
What are some characteristics of the Python programming language?
1. It is a high level general purpose programming language created in 1991 by Guido van Rossum.
2. The language is interpreted, with CPython (written in C) being the most used/maintained implementation.
3. It is strongly typed. The typing discipline is duck typing and gradual.
4. Python focuses on readability and makes use of whitespace/indentation instead of braces { }
5. The Python package manager is called pip ("pip installs packages"), with more than 200,000 available packages.
6. Python comes with pip installed and a big standard library that offers the programmer many ready-made solutions.
7. In Python, **everything** is an object.
What built-in types Python has?
List
Dictionary
Set
Numbers (int, float, ...)
String
Bool
Tuple
Frozenset
What is mutability? Which of the built-in types in Python are mutable?
Mutability determines whether you can modify an object of a specific type.
The mutable data types are:
List
Dictionary
Set
The immutable data types are:
Numbers (int, float, ...)
String
Bool
Tuple
Frozenset
Python - Booleans
What is the result of each of the following?
- 1 > 2
- 'b' > 'a'
- 1 == 'one'
- 2 > 'one'
- False
- True
- False
- TypeError
What is the result of `bool("")`? What about `bool(" ")`? Explain
bool("") -> evaluates to False
bool(" ") -> evaluates to True
What is the result of running [] is not []
? explain the result
It evaluates to True.
The reason is that the two created empty lists are different objects. x is y
only evaluates to True when x and y are the same object.
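A quick way to see this in the interpreter:

a = []
b = []
print(a == b)          # True  - they have the same value
print(a is b)          # False - they are two different objects
print(id(a) == id(b))  # False - different identities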
What is the result of running True-True
?
0
Python - Strings
True or False? String is an immutable data type in Python
True
How to check if a string starts with a letter?
Regex:
import re
if re.match("^[a-zA-Z]+.*", string):
string built-in:
if string and string[0].isalpha():
How to check if all characters in a given string are digits?
string.isdigit()
How to remove trailing slash ('/') from a string?
string.rstrip('/')
What is the result of each of the following?
- "abc"*3
- "abc"*2.5
- "abc"*2.0
- "abc"*True
- "abc"*False
- abcabcabc
- TypeError
- TypeError
- "abc"
- ""
Improve the following code:
char = input("Insert a character: ")
if char == "a" or char == "o" or char == "e" or char =="u" or char == "i":
print("It's a vowel!")
char = input("Insert a character: ")  # For readability
if char and char[0].lower() in "aeiou":  # Takes care of multiple characters and upper/lower cases
    print("It's a vowel!")
OR
if input("Insert a character: ")[0].lower() in "aeiou":  # Takes care of multiple characters and upper/lower cases
    print("It's a vowel!")
Python - Functions
How to define a function with Python?
Using the `def` keyword. For example:
def sum(a, b):
return (a + b)
In Python, functions are first-class objects. What does it mean?
In general, first-class objects in programming languages are objects which can be assigned to a variable, used as a return value and passed as arguments or parameters.
In python you can treat functions this way. Let's say we have the following function
def my_function():
return 5
You can then assign a function to a variable like this x = my_function
or you can return a function as a return value like this return my_function
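A short runnable sketch of all three uses (the helper names are made up for the example):

def my_function():
    return 5

x = my_function                  # assigned to a variable
print(x())                       # 5

def call_twice(f):               # passed as an argument
    return f() + f()
print(call_twice(my_function))   # 10

def make_function():             # returned as a value
    return my_function
print(make_function()())         # 5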
Python - Integer
Write a function to determine if a number is a Palindrome
Python - OOP
Explain inheritance and how to use it in Python
By definition, inheritance is the mechanism where an object acts as a base of another object, retaining all its
properties.
So if class B inherits from class A, every characteristic of class A will also be available in class B.
Class A would be the 'base class' and class B would be the 'derived class'.
This comes in handy when you have several classes that share the same functionality.
The basic syntax is:
class Base: pass
class Derived(Base): pass
A more elaborate example:
class Animal:
def __init__(self):
print("and I'm alive!")
def eat(self, food):
print("ñom ñom ñom", food)
class Human(Animal):
def __init__(self, name):
print('My name is ', name)
super().__init__()
def write_poem(self):
print('Foo bar bar foo foo bar!')
class Dog(Animal):
def __init__(self, name):
print('My name is', name)
super().__init__()
def bark(self):
print('woof woof')
michael = Human('Michael')
michael.eat('Spam')
michael.write_poem()
bruno = Dog('Bruno')
bruno.eat('bone')
bruno.bark()
>>> My name is Michael
>>> and I'm alive!
>>> ñom ñom ñom Spam
>>> Foo bar bar foo foo bar!
>>> My name is Bruno
>>> and I'm alive!
>>> ñom ñom ñom bone
>>> woof woof
Calling super() calls the base class's method; thus, by calling super().__init__() we called Animal's __init__.
There is a more advanced Python feature called metaclasses that lets the programmer directly control class creation.
Explain and demonstrate class attributes & instance attributes
In the following block of code x
is a class attribute while self.y
is an instance attribute
class MyClass(object):
x = 1
def __init__(self, y):
self.y = y
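A short usage sketch (continuing the MyClass example above):

a = MyClass(5)
b = MyClass(7)
print(MyClass.x, a.x, b.x)  # 1 1 1 - the class attribute is shared
print(a.y, b.y)             # 5 7   - instance attributes are per object
MyClass.x = 2
print(a.x, b.x)             # 2 2   - changing the class attribute affects all instances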
Python - Exceptions
What is an error? What is an exception? What types of exceptions are you familiar with?
# Note that you generally don't need to know the compiling process but knowing where everything comes from
# and giving complete answers shows that you truly know what you are talking about.
Generally, every compiling process has two steps:
- Analysis
- Code Generation.
Analysis can be broken into:
1. Lexical analysis (Tokenizes source code)
2. Syntactic analysis (Check whether the tokens are legal or not, tldr, if syntax is correct)
for i in 'foo'
^
SyntaxError: invalid syntax
We missed ':'
3. Semantic analysis (Contextual analysis, legal syntax can still trigger errors, did you try to divide by 0,
hash a mutable object or use an undeclared function?)
1/0
ZeroDivisionError: division by zero
These three analysis steps are responsible for error handling.
The second step is responsible for errors, mostly syntax errors, the most common kind of error.
The third step is responsible for Exceptions.
As we have seen, Exceptions are semantic errors. There are many built-in Exceptions:
ImportError
ValueError
KeyError
FileNotFoundError
IndentationError
IndexError
...
You can also have user defined Exceptions that have to inherit from the `Exception` class, directly or indirectly.
Basic example:
class DividedBy2Error(Exception):
def __init__(self, message):
self.message = message
def division(dividend,divisor):
if divisor == 2:
raise DividedBy2Error('I dont want you to divide by 2!')
return dividend / divisor
division(100, 2)
>>> __main__.DividedBy2Error: I dont want you to divide by 2!
Explain Exception Handling and how to use it in Python
Exceptions: Errors detected during execution are called Exceptions.
Handling Exceptions: when an error, or exception as we call it, occurs, Python will normally stop and generate an error message.
Exceptions can be handled using try
and except
statement in python.
Example: the following example asks the user for input until a valid integer has been entered.
If the user enters a non-integer value, an exception is raised; using except we catch that exception and ask the user to enter a valid integer again.
while True:
try:
a = int(input("please enter an integer value: "))
break
except ValueError:
print("Ops! Please enter a valid integer value.")
For more details about errors and exceptions follow this https://docs.python.org/3/tutorial/errors.html
What is the result of running the following function?
def true_or_false():
try:
return True
finally:
return False
False
Python Built-in functions
Explain the following built-in functions (their purpose + use case example):
- repr
- any
- all
What is the difference between repr function and str?
What is the __call__ method?
Do classes has the __call__ method as well? What for?
What _ is used for in Python?
- Translation lookup in i18n
- Hold the result of the last executed expression or statement in the interactive interpreter.
- As a general purpose "throwaway" variable name. For example: x, y, _ = get_data() (x and y are used but since we don't care about third variable, we "threw it away").
Explain what is GIL
What is Lambda? How is it used?
A lambda
expression is an 'anonymous' function; the difference from a normal function defined using the keyword `def` is the syntax and usage.
The syntax is:
lambda [parameters]: [expression]
Examples:
- A lambda function that adds 10 to any argument passed:
x = lambda a: a + 10
print(x(10))
- An addition function
addition = lambda x, y: x + y
print(addition(10, 20))
- Squaring function
square = lambda x : x ** 2
print(square(5))
Generally it is considered bad practice under PEP 8 to assign a lambda expression to a name; lambdas are meant to be used as parameters and inside other functions.
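For example, passing a lambda as a parameter instead of assigning it to a name:

pairs = [("b", 2), ("a", 1), ("c", 3)]
print(sorted(pairs, key=lambda p: p[1]))              # sort by the second element
print(list(map(lambda n: n * 2, [1, 2, 3])))          # [2, 4, 6]
print(list(filter(lambda n: n % 2 == 0, range(6))))   # [0, 2, 4]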
Properties
Are there private variables in Python? How would you make an attribute of a class, private?
Explain the following:
- getter
- setter
- deleter
Explain what is @property
How do you swap values between two variables?
x, y = y, x
Explain the following object's magic variables:
- dict
Write a function to return the sum of one or more numbers. The user will decide how many numbers to use
First you ask the user for the amount of numbers that will be used. Use a while loop that runs until amount_of_numbers becomes 0, subtracting 1 from amount_of_numbers each iteration. In the while loop you ask the user for a number, which is added to a variable each time the loop runs.
def return_sum():
amount_of_numbers = int(input("How many numbers? "))
total_sum = 0
while amount_of_numbers != 0:
num = int(input("Input a number. "))
total_sum += num
amount_of_numbers -= 1
return total_sum
Print the average of [2, 5, 6]. It should be rounded to 3 decimal places
li = [2, 5, 6]
print("{0:.3f}".format(sum(li)/len(li)))
Python - Lists
What is a tuple in Python? What is it used for?
A tuple is a built-in data type in Python. It's used for storing multiple items in a single variable.
List, like a tuple, is also used for storing multiple items. What is then, the difference between a tuple and a list?
A list, as opposed to a tuple, is a mutable data type. It means we can modify it and add items to it.
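A quick demonstration of the difference:

some_list = [1, 2, 3]
some_list[0] = 10        # fine - lists are mutable
some_list.append(4)

some_tuple = (1, 2, 3)
try:
    some_tuple[0] = 10   # tuples are immutable
except TypeError as e:
    print(e)             # 'tuple' object does not support item assignment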
How to add the number 2 to the list x = [1, 2, 3]
x.append(2)
How to check how many items a list contains?
len(some_list)
How to get the last element of a list?
some_list[-1]
How to add the items of [1, 2, 3] to the list [4, 5, 6]?
x = [4, 5, 6]
x.extend([1, 2, 3])
Don't use append
unless you would like to add the whole list as a single item.
How to remove the first 3 items from a list?
my_list[0:3] = []
How do you get the maximum and minimum values from a list?
Maximum: max(some_list)
Minimum: min(some_list)
How to get the top/biggest 3 items from a list?
sorted(some_list, reverse=True)[:3]
Or
some_list.sort(reverse=True)
some_list[:3]
How to insert an item to the beginning of a list? What about two items?
How to sort list by the length of items?
sorted_li = sorted(li, key=len)
Or without creating a new list:
li.sort(key=len)
Do you know what the difference is between list.sort() and sorted(list)?
- sorted(list) will return a new list (the original list doesn't change)
- list.sort() will return None but the list is changed in place
- sorted() works on any iterable (dictionaries, strings, ...)
- list.sort() is faster than sorted(list) in the case of lists (see the short demonstration below)
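A short demonstration of these points:

li = [3, 1, 2]
new_li = sorted(li)
print(new_li)          # [1, 2, 3] - a new list
print(li)              # [3, 1, 2] - the original is unchanged
print(li.sort())       # None - sorts in place and returns None
print(li)              # [1, 2, 3]
print(sorted("bca"))   # ['a', 'b', 'c'] - sorted() works on any iterable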
Convert every string to an integer: [['1', '2', '3'], ['4', '5', '6']]
nested_li = [['1', '2', '3'], ['4', '5', '6']]
[[int(x) for x in li] for li in nested_li]
How to merge two sorted lists into one sorted list?
sorted(li1 + li2)
Another way:
i, j = 0, 0
merged_li = []
while i < len(li1) and j < len(li2):
if li1[i] < li2[j]:
merged_li.append(li1[i])
i += 1
else:
merged_li.append(li2[j])
j += 1
merged_li = merged_li + li1[i:] + li2[j:]
How to check if all the elements in a given lists are unique? so [1, 2, 3] is unique but [1, 1, 2, 3] is not unique because 1 exists twice
There are many ways of solving this problem:
# Note: :list and -> bool are just python typings, they are not needed for the correct execution of the algorithm.
Taking advantage of sets and len:
def is_unique(l:list) -> bool:
return len(set(l)) == len(l)
This one can be seen used in other programming languages as well.
def is_unique2(l:list) -> bool:
seen = []
for i in l:
if i in seen:
return False
seen.append(i)
return True
Here we just count and make sure every element is just repeated once.
def is_unique3(l:list) -> bool:
for i in l:
if l.count(i) > 1:
return False
return True
This one might look more convoluted but hey, one-liners.
def is_unique4(l:list) -> bool:
return all(map(lambda x: l.count(x) < 2, l))
You have the following function
def my_func(li = []):
li.append("hmm")
print(li)
If we call it 3 times, what would be the result each call?
['hmm']
['hmm', 'hmm']
['hmm', 'hmm', 'hmm']
How to iterate over a list?
for item in some_list:
print(item)
How to iterate over a list with indexes?
for i, item in enumerate(some_list):
print(i)
How to start list iteration from 2nd index?
Using range like this
for i in range(1, len(some_list)):
some_list[i]
Another way is using slicing
for i in some_list[1:]:
How to iterate over a list in reverse order?
Method 1
for i in reversed(li):
...
Method 2
n = len(li) - 1
while n >= 0:
...
n -= 1
Sort a list of lists by the second item of each nested list
li = [[1, 4], [2, 1], [3, 9], [4, 2], [4, 5]]
sorted(li, key=lambda l: l[1])
or
li.sort(key=lambda l: l[1])
Combine [1, 2, 3] and ['x', 'y', 'z'] so the result is [(1, 'x'), (2, 'y'), (3, 'z')]
nums = [1, 2, 3]
letters = ['x', 'y', 'z']
list(zip(nums, letters))
What is List Comprehension? Is it better than a typical loop? Why? Can you demonstrate how to use it?
You have the following list: [{'name': 'Mario', 'food': ['mushrooms', 'goombas']}, {'name': 'Luigi', 'food': ['mushrooms', 'turtles']}]
Extract all type of foods. Final output should be: {'mushrooms', 'goombas', 'turtles'}
brothers_menu = \
[{'name': 'Mario', 'food': ['mushrooms', 'goombas']}, {'name': 'Luigi', 'food': ['mushrooms', 'turtles']}]
# "Classic" Way
def get_food(brothers_menu) -> set:
temp = []
for brother in brothers_menu:
for food in brother['food']:
temp.append(food)
return set(temp)
# One liner way (Using list comprehension)
set([food for bro in brothers_menu for food in bro['food']])
Python - Dictionaries
How to create a dictionary?
my_dict = dict(x=1, y=2) OR my_dict = {'x': 1, 'y': 2} OR my_dict = dict([('x', 1), ('y', 2)])
How to remove a key from a dictionary?
del my_dict['some_key']
you can also use my_dict.pop('some_key')
which returns the value of the key.
How to sort a dictionary by values?
{k: v for k, v in sorted(x.items(), key=lambda item: item[1])}
How to sort a dictionary by keys?
dict(sorted(some_dictionary.items()))
How to merge two dictionaries?
some_dict1.update(some_dict2)
Convert the string "a.b.c" to the dictionary {'a': {'b': {'c': 1}}}
from functools import reduce

output = {}
string = "a.b.c"
path = string.split('.')
target = reduce(lambda d, k: d.setdefault(k, {}), path[:-1], output)
target[path[-1]] = 1
print(output)
Common Algorithms Implementation
Python Files
How to write to a file?
with open('file.txt', 'w') as file:
file.write("My insightful comment")
How to print the 12th line of a file?
How to reverse a file?
Sum all the integers in a given file
Print a random line of a given file
Print every 3rd line of a given file
Print the number of lines in a given file
Print the number of words in a given file
Can you write a function which will print all the files in a given directory, including sub-directories?
Write a dictionary (variable) to a file
import json
with open('file.json', 'w') as f:
f.write(json.dumps(dict_var))
Python OS
How to print current working directory?
import os
print(os.getcwd())
Given the path /dir1/dir2/file1
print the file name (file1)
import os
print(os.path.basename('/dir1/dir2/file1'))
# Another way
print(os.path.split('/dir1/dir2/file1')[1])
Given the path /dir1/dir2/file1
- Print the path without the file name (/dir1/dir2)
- Print the name of the directory where the file resides (dir2)
import os
## Part 1.
# os.path.dirname gives path removing the end component
dirpath = os.path.dirname('/dir1/dir2/file1')
print(dirpath)
## Part 2.
print(os.path.basename(dirpath))
How do you execute shell commands using Python?
How do you join path components? for example /home
and luigi
will result in /home/luigi
How do you remove non-empty directory?
Python Regex
How do you perform regular expressions related operations in Python? (match patterns, substitute strings, etc.)
Using the re module
How to substitute the string "green" with "blue"?
How to find all the IP addresses in a variable? How to find them in a file?
Python Strings
Find the first repeated character in a string
While you iterate through the characters, store them in a dictionary and check for every character whether it's already in the dictionary.
def firstRepeatedCharacter(s):
    chars = {}
    for ch in s:
        if ch in chars:
            return ch
        else:
            chars[ch] = 0
How to extract the unique characters from a string? for example given the input "itssssssameeeemarioooooo" the output will be "mrtisaoe"
x = "itssssssameeeemarioooooo"
y = ''.join(set(x))
Find all the permutations of a given string
def permute_string(string):
if len(string) == 1:
return [string]
permutations = []
for i in range(len(string)):
swaps = permute_string(string[:i] + string[(i+1):])
for swap in swaps:
permutations.append(string[i] + swap)
return permutations
print(permute_string("abc"))
Short way (but probably not acceptable in interviews):
from itertools import permutations
[''.join(p) for p in permutations("abc")]
Detailed answer can be found here: http://codingshell.com/python-all-string-permutations
How to check if a string contains a sub string?
Find the frequency of each character in string
Count the number of spaces in a string
Given a string, find the N most repeated words
Given the string (which represents a matrix) "1 2 3\n4 5 6\n7 8 9" create rows and columns variables (should contain integers, not strings)
What is the result of each of the following?
>> ', '.join(["One", "Two", "Three"])
>> " ".join("welladsadgadoneadsadga".split("adsadga")[:2])
>> "".join(["c", "t", "o", "a", "o", "q", "l"])[0::2]
>>> 'One, Two, Three'
>>> 'well done'
>>> 'cool'
If x = "pizza"
, what would be the result of x[::-1]
?
It will reverse the string, so x would be equal to azzip
.
Reverse each word in a string (while keeping the order)
What is the output of the following code: "".join(["a", "h", "m", "a", "h", "a", "n", "q", "r", "l", "o", "i", "f", "o", "o"])[2::3]
mario
Python Iterators
What is an iterator?
Python Misc
Explain data serialization and how do you perform it with Python
How do you handle argument parsing in Python?
What is a generator? Why use generators?
What would be the output of the following block?
for i in range(3, 3):
print(i)
No output :)
What is yield
? When would you use it?
Explain the following types of methods and how to use them:
- Static method
- Class method
- instance method
How to reverse a list?
How to combine list of strings into one string with spaces between the strings
You have the following list of nested lists: [['Mario', 90], ['Geralt', 82], ['Gordon', 88]]
How to sort the list by the numbers in the nested lists?
One way is:
the_list.sort(key=lambda x: x[1])
Explain the following:
- zip()
- map()
- filter()
Python - Slicing
For the following slicing exercises, assume you have the following list: my_list = [8, 2, 1, 10, 5, 4, 3, 9]
What is the result of `my_list[0:4]`?
What is the result of `my_list[5:6]`?
What is the result of `my_list[5:5]`?
What is the result of `my_list[::-1]`?
What is the result of `my_list[::3]`?
What is the result of `my_list[2:]`?
What is the result of `my_list[:3]`?
Python Debugging
How do you debug Python code?
pdb :D
How to check how much time it took to execute a certain script or block of code?
What empty return
returns?
Short answer is: It returns a None object.
We could go a bit deeper and explain the difference between
def a ():
return
>>> None
And
def a ():
pass
>>> None
Or we could be asked this as a follow-up question, since they both give the same result.
We could use the dis module to see what's going on:
2 0 LOAD_CONST 0 (<code object a at 0x0000029C4D3C2DB0, file "<dis>", line 2>)
2 LOAD_CONST 1 ('a')
4 MAKE_FUNCTION 0
6 STORE_NAME 0 (a)
5 8 LOAD_CONST 2 (<code object b at 0x0000029C4D3C2ED0, file "<dis>", line 5>)
10 LOAD_CONST 3 ('b')
12 MAKE_FUNCTION 0
14 STORE_NAME 1 (b)
16 LOAD_CONST 4 (None)
18 RETURN_VALUE
Disassembly of <code object a at 0x0000029C4D3C2DB0, file "<dis>", line 2>:
3 0 LOAD_CONST 0 (None)
2 RETURN_VALUE
Disassembly of <code object b at 0x0000029C4D3C2ED0, file "<dis>", line 5>:
6 0 LOAD_CONST 0 (None)
2 RETURN_VALUE
An empty return
is exactly the same as return None
and functions without any explicit return
will always return None regardless of the operations, therefore
def sum(a, b):
global c
c = a + b
>>> None
How to improve the following block of code?
li = []
for i in range(1, 10):
li.append(i)
[i for i in range(1, 10)]
Given the following function
def is_int(num):
if isinstance(num, int):
print('Yes')
else:
print('No')
What would be the result of is_int(2) and is_int(False)?
Both calls print 'Yes', because in Python bool is a subclass of int, so isinstance(False, int) evaluates to True.
Python - Linked List
Can you implement a linked list in Python?
The reason we need to implement it in the first place is that a linked list isn't part of the Python standard library.
To implement a linked list, we have to implement two structures: The linked list itself and a node which is used by the linked list.
Let's start with a node. A node has some value (the data it holds) and a pointer to the next node
class Node(object):
def __init__(self, data):
self.data = data
self.next = None
Now the linked list. An empty linked list has nothing but an empty head.
class LinkedList(object):
def __init__(self):
self.head = None
Now we can start using the linked list
ll = LinkedList()
ll.head = Node(1)
ll.head.next = Node(2)
ll.head.next.next = Node(3)
What we have is:
| 1 | -> | 2 | -> | 3 |
Usually, more methods are implemented, like a push_head() method where you insert a node at the beginning of the linked list
def push_head(self, value):
new_node = Node(value)
new_node.next = self.head
self.head = new_node
Add a method to the Linked List class to traverse (print every node's data) the linked list
def print_list(self):
    node = self.head
    while node:
        print(node.data)
        node = node.next
Write a method that will return a boolean based on whether there is a loop in the linked list or not
Let's use the Floyd's Cycle-Finding algorithm:
def loop_exists(self):
    one_step_p = self.head
    two_steps_p = self.head
    while one_step_p and two_steps_p and two_steps_p.next:
        one_step_p = one_step_p.next
        two_steps_p = two_steps_p.next.next
        if one_step_p == two_steps_p:
            return True
    return False
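A small usage example for the method above (hypothetical values, reusing the Node and LinkedList classes defined earlier and assuming loop_exists() was added as a method of LinkedList):

ll = LinkedList()
ll.head = Node(1)
ll.head.next = Node(2)
ll.head.next.next = Node(3)
ll.head.next.next.next = ll.head   # create a loop on purpose
print(ll.loop_exists())            # True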
Python - Stack
Implement Stack in Python
Python Testing
What is your experience with writing tests in Python?
What is PEP8? Give an example of 3 style guidelines
PEP8 is a list of coding conventions and style guidelines for Python
5 style guidelines:
1. Limit all lines to a maximum of 79 characters.
2. Surround top-level function and class definitions with two blank lines.
3. Use commas when making a tuple of one element
4. Use spaces (and not tabs) for indentation
5. Use 4 spaces per indentation level
How to test if an exception was raised?
What assert
does in Python?
Explain mocks
How do you measure execution time of small code snippets?
Why one shouldn't use assert
in non-test/production code?
Flask
Can you describe what Django/Flask is and how you have used it? Why Flask and not Django? (or vice versa)
What is a route?
What is a blueprint in Flask?
What is a template?
zip
Given x = [1, 2, 3]
, what is the result of list(zip(x))?
[(1,), (2,), (3,)]
What is the result of each of the following:
list(zip(range(5), range(50), range(50)))
list(zip(range(5), range(50), range(-2)))
[(0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4)]
[]
Python Descriptors
What would be the result of running a.num2
assuming the following code
class B:
    def __get__(self, obj, objtype=None):
        return 10

class A:
    num1 = 2
    num2 = B()
10
What would be the result of running some_car = Car("Red", 4)
assuming the following code
class Print:
    def __get__(self, obj, objtype=None):
        value = obj._color
        print("Color was set to {}".format(value))
        return value

    def __set__(self, obj, value):
        print("The color of the car is {}".format(value))
        obj._color = value

class Car:
    color = Print()

    def __init__(self, color, age):
        self.color = color
        self.age = age
An instance of Car class will be created and the following will be printed: "The color of the car is Red"
Python Misc
How can you spawn multiple processes with Python?
Implement simple calculator for two numbers
def add(num1, num2):
return num1 + num2
def sub(num1, num2):
return num1 - num2
def mul(num1, num2):
return num1*num2
def div(num1, num2):
return num1 / num2
operators = {
'+': add,
'-': sub,
'*': mul,
'/': div
}
if __name__ == '__main__':
operator = str(input("Operator: "))
num1 = int(input("1st number: "))
num2 = int(input("2nd number: "))
print(operators[operator](num1, num2))
What data types are you familiar with that are not Python built-in types but still provided by modules which are part of the standard library?
This is a good reference https://docs.python.org/3/library/datatypes.html
Explain what is a decorator
In Python, everything is an object, even functions themselves. Therefore you can pass functions as arguments to another function, e.g.:
def wee(word):
return word
def oh(f):
return f + "Ohh"
>>> oh(wee("Wee"))
<<< WeeOhh
This allows us to control what happens before the execution of any given function, and if we add another function as a wrapper (a function receiving another function as a parameter), we can also control what happens after the execution.
Sometimes we want to control the before-after execution of many functions and it would get tedious to write
f = function(function_1())
f = function(function_1(function_2(*args)))
every time. That's what decorators do: they introduce syntax to write all of this on the go, using the '@' symbol.
Can you show how to write and use decorators?
These two decorators (ntimes and timer) are usually used to demonstrate decorator functionality; you can find them in lots of
tutorials/reviews. I first saw these examples in PyData 2017. https://www.youtube.com/watch?v=7lmCu8wz8ro&t=3731s
Simple decorator:
def deco(f):
print(f"Hi I am the {f.__name__}() function!")
return f
@deco
def hello_world():
return "Hi, I'm in!"
a = hello_world()
print(a)
>>> Hi I am the hello_world() function!
Hi, I'm in!
This is the simplest decorator version; it basically saves us from writing hello_world = deco(hello_world).
But at this point we can only control the before execution, let's take on the after:
def deco(f):
def wrapper(*args, **kwargs):
print("Rick Sanchez!")
func = f(*args, **kwargs)
print("I'm in!")
return func
return wrapper
@deco
def f(word):
print(word)
a = f("************")
>>> Rick Sanchez!
************
I'm in!
deco receives a function -> f
wrapper receives the arguments -> *args, **kwargs
wrapper returns the result of calling the function with those arguments -> f(*args, **kwargs)
deco returns wrapper
As you can see we conveniently do things before and after the execution of a given function.
For example, we could write a decorator that calculates the execution time of a function.
import time
def deco(f):
def wrapper(*args, **kwargs):
before = time.time()
func = f(*args, **kwargs)
after = time.time()
print(after-before)
return func
return wrapper
@deco
def f():
time.sleep(2)
print("************")
a = f()
>>> 2.0008859634399414
Or create a decorator that executes a function n times.
def n_times(n):
def wrapper(f):
def inner(*args, **kwargs):
for _ in range(n):
func = f(*args, **kwargs)
return func
return inner
return wrapper
@n_times(4)
def f():
print("************")
a = f()
>>>************
************
************
************
Write a decorator that calculates the execution time of a function
Write a script which will determine if a given host is accessible on a given port
Are you familiar with Dataclasses? Can you explain what are they used for?
You wrote a class to represent a car. How would you compare two cars instances if two cars are equal if they have the same model and color?
class Car:
def __init__(self, model, color):
self.model = model
self.color = color
def __eq__(self, other):
if not isinstance(other, Car):
return NotImplemented
return self.model == other.model and self.color == other.color
>> a = Car('model_1', 'red')
>> b = Car('model_2', 'green')
>> c = Car('model_1', 'red')
>> a == b
False
>> a == c
True
Explain Context Manager
Tell me everything you know about concurrency in Python
Explain the Buffer Protocol
Do you have experience with web scraping? Can you describe what have you used and for what?
Can you implement Linux's tail
command in Python? Bonus: implement head
as well
You have created a web page where a user can upload a document. But the function which reads the uploaded files, runs for a long time, based on the document size and user has to wait for the read operation to complete before he/she can continue using the web site. How can you overcome this?
How yield works exactly?
Monitoring
Explain monitoring. What is it? What its goal?
Google: "Monitoring is one of the primary means by which service owners keep track of a system’s health and availability".
What is wrong with the old approach of watching for a specific value and trigger an email/phone alert while value is exceeded?
This approach requires a human to always check why the value was exceeded and how to handle it, while today it is more effective to notify people only when they need to take an actual action. If the issue doesn't require any human intervention, then the problem can be fixed by some processes running in the relevant environment.
What types of monitoring outputs are you familiar with and/or used in the past?
Alerts
Tickets
Logging
What is the difference between infrastructure monitoring and application monitoring? (methods, tools, ...)
Prometheus
What is Prometheus? What are some of Prometheus's main features?
In what scenarios it might be better to NOT use Prometheus?
From Prometheus documentation: "if you need 100% accuracy, such as for per-request billing".
Describe Prometheus architecture and components
Can you compare Prometheus to other solutions like InfluxDB for example?
What is an Alert?
Describe the following Prometheus components:
- Prometheus server
- Push Gateway
- Alert Manager
Prometheus server is responsible for scraping and storing the data
Push gateway is used for short-lived jobs
Alert manager is responsible for alerts ;)
What is an Instance? What is a Job?
What core metrics types Prometheus supports?
What is an exporter? What is it used for?
Which Prometheus best practices are you familiar with? Name at least three
How to get total requests in a given period of time?
What does HA in Prometheus mean?
How do you join two metrics?
How to write a query that returns the value of a label?
How do you convert cpu_user_seconds to cpu usage in percentage?
Git
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
My first Commit | Commit | Exercise | Solution | |
Time to Branch | Branch | Exercise | Solution | |
Squashing Commits | Commit | Exercise | Solution |
How do you know if a certain directory is a git repository?
You can check if there is a ".git" directory.
Explain the following: git directory
, working directory
and staging area
This answer taken from git-scm.com
"The Git directory is where Git stores the meta data and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.
The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.
The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It’s sometimes referred to as the index, but it’s becoming standard to refer to it as the staging area."
What is the difference between git pull
and git fetch
?
Shortly, git pull = git fetch + git merge
When you run git pull, it gets all the changes from the remote or central repository and attaches it to your corresponding branch in your local repository.
git fetch gets all the changes from the remote repository, stores the changes in a separate branch in your local repository
How to check if a file is tracked and if not, then track it?
There are different ways to check whether a file is tracked or not:
- git ls-files <file> -> exit code of 0 means it's tracked
- git blame <file>
- ...
How can you see which changes have done before committing them?
`git diff`
What git status
does?
You have two branches - main and devel. How do you make sure devel is in sync with main?
git checkout main
git pull
git checkout devel
git merge main
Git - Merge
You have two branches - main and devel. How do you put devel into main?
git checkout main
git merge devel
git push origin main
How to resolve git merge conflicts?
First, you open the files which are in conflict and identify what the conflicts are. Next, based on what is accepted in your company or team, you either discuss the conflicts with your colleagues or resolve them by yourself. After resolving the conflicts, you add the files with `git add`. Finally, you complete the merge with `git commit` (if the conflicts happened during a rebase, you run `git rebase --continue` instead).
What merge strategies are you familiar with?
Mentioning two or three should be enough and it's probably good to mention that 'recursive' is the default one.
recursive, resolve, ours, theirs
This page explains it the best: https://git-scm.com/docs/merge-strategies
Explain Git octopus merge
Probably good to mention that:
- It's good for cases of merging more than one branch (and it's also the default for such use cases)
- It's primarily meant for bundling topic branches together
This is a great article about Octopus merge: http://www.freblogg.com/2016/12/git-octopus-merge.html
What is the difference between git reset
and git revert
?
git revert
creates a new commit which undoes the changes from last commit.
git reset
depends on the usage, can modify the index or change the commit which the branch head
is currently pointing at.
Git - Rebase
You would like to move the fourth commit to the top. How would you achieve that?
Using the git rebase
command in interactive mode (git rebase -i)
In what situations are you using git rebase
?
How do you revert a specific file to previous commit?
git checkout HEAD~1 -- /path/of/the/file
How to squash last two commits?
What is the .git
directory? What can you find there?
The .git folder contains all the information that is necessary for your project in version control and all the information about commits, remote repository address, etc. All of them are present in this folder. It also contains a log that stores your commit history so that you can roll back to history.
This info copied from https://stackoverflow.com/questions/29217859/what-is-the-git-folder
What are some Git anti-patterns? Things that you shouldn't do
- Waiting too long between commits (i.e. making huge commits)
- Removing the .git directory :)
How do you remove a remote branch?
You delete a remote branch with this syntax:
git push origin :[branch_name]
Are you familiar with gitattributes? When would you use it?
gitattributes allow you to define attributes per pathname or path pattern.
You can use it for example to control endlines in files. In Windows and Unix based systems, you have different characters for new lines (\r\n and \n accordingly). So using gitattributes we can align it for both Windows and Unix with * text=auto
in .gitattributes for anyone working with git. This way, if you use the Git project on Windows you'll get \r\n and if you are using Unix or Linux, you'll get \n.
How do you discard local file changes? (before commit)
git checkout -- <file_name>
How do you discard local commits?
git reset HEAD~1
for removing last commit
If you would like to also discard the changes, run `git reset --hard HEAD~1`
True or False? To remove a file from git but not from the filesystem, one should use git rm
False. Plain git rm removes the file from the filesystem as well. If you would like to keep the file on your filesystem, use git rm --cached <file_name>
Go
What are some characteristics of the Go programming language?
- Strong and static typing - the type of the variables can't be changed over time and they have to be defined at compile time
- Simplicity
- Fast compile times
- Built-in concurrency
- Garbage collected
- Platform independent
- Compile to standalone binary - anything you need to run your app will be compiled into one binary. Very useful for version management in run-time.
Go also has a good community.
What is the difference between var x int = 2
and x := 2
?
The result is the same, a variable with the value 2.
With var x int = 2
we are setting the variable type to integer while with x := 2
we are letting Go figure out by itself the type.
True or False? In Go we can redeclare variables and once declared we must use it.
False. We can't redeclare variables, but yes, we must use declared variables.
What libraries of Go have you used?
This should be answered based on your usage but some examples are:
- fmt - formatted I/O
What is the problem with the following block of code? How to fix it?
func main() {
var x float32 = 13.5
var y int
y = x
}
Go doesn't do implicit type conversion, so a float32 value can't be assigned to an int variable and the code doesn't compile. To fix it, convert explicitly: y = int(x)
The following block of code tries to convert the integer 101 to a string but instead we get "e". Why is that? How to fix it?
package main
import "fmt"
func main() {
var x int = 101
var y string
y = string(x)
fmt.Println(y)
}
string(x) interprets the integer as a Unicode code point, and code point 101 is the character 'e', so that is the resulting string.
If you want to get "101" you should use the package "strconv" and replace y = string(x)
with y = strconv.Itoa(x)
What is wrong with the following code?:
package main
func main() {
var x = 2
var y = 3
const someConst = x + y
}
Constants in Go can only be declared using constant expressions.
But x, y and their sum are variables, so compilation fails with:
const initializer x + y is not a constant
What will be the output of the following block of code?:
package main
import "fmt"
const (
x = iota
y = iota
)
const z = iota
func main() {
fmt.Printf("%v\n", x)
fmt.Printf("%v\n", y)
fmt.Printf("%v\n", z)
}
Go's iota identifier is used in const declarations to simplify definitions of incrementing numbers. Because it can be used in expressions, it provides a generality beyond that of simple enumerations.
x and y belong to the first const group, so they get the values 0 and 1. z is in a second group where iota restarts, so it gets 0. The output is therefore 0, 1, 0.
Iota page in Go Wiki
What is _ used for in Go?
It avoids having to declare all the variables for the return values.
It is called the blank identifier.
answer in SO
What will be the output of the following block of code?:
package main
import "fmt"
const (
_ = iota + 3
x
)
func main() {
fmt.Printf("%v\n", x)
}
Since the first constant in the group is iota + 3 (iota starts at 0, so its value is 3), the next constant x implicitly repeats the same expression with iota equal to 1, giving it the value 4
What will be the output of the following block of code?:
package main
import (
"fmt"
"sync"
"time"
)
func main() {
var wg sync.WaitGroup
wg.Add(1)
go func() {
time.Sleep(time.Second * 2)
fmt.Println("1")
wg.Done()
}()
go func() {
fmt.Println("2")
}()
wg.Wait()
fmt.Println("3")
}
Output: 2 1 3
What will be the output of the following block of code?:
package main
import (
"fmt"
)
func mod1(a []int) {
for i := range a {
a[i] = 5
}
fmt.Println("1:", a)
}
func mod2(a []int) {
a = append(a, 125) // !
for i := range a {
a[i] = 5
}
fmt.Println("2:", a)
}
func main() {
s1 := []int{1, 2, 3, 4}
mod1(s1)
fmt.Println("1:", s1)
s2 := []int{1, 2, 3, 4}
mod2(s2)
fmt.Println("2:", s2)
}
Output:
1: [5 5 5 5]
1: [5 5 5 5]
2: [5 5 5 5 5]
2: [1 2 3 4]
In mod1, a shares the same underlying array as s1, so assigning to a[i] also changes s1.
In mod2, append allocates a new underlying array (the capacity is exceeded), so the writes only affect a's new slice and s2 stays unchanged.
What will be the output of the following block of code?:
package main
import (
"container/heap"
"fmt"
)
// An IntHeap is a min-heap of ints.
type IntHeap []int
func (h IntHeap) Len() int { return len(h) }
func (h IntHeap) Less(i, j int) bool { return h[i] < h[j] }
func (h IntHeap) Swap(i, j int) { h[i], h[j] = h[j], h[i] }
func (h *IntHeap) Push(x interface{}) {
// Push and Pop use pointer receivers because they modify the slice's length,
// not just its contents.
*h = append(*h, x.(int))
}
func (h *IntHeap) Pop() interface{} {
old := *h
n := len(old)
x := old[n-1]
*h = old[0 : n-1]
return x
}
func main() {
h := &IntHeap{4, 8, 3, 6}
heap.Init(h)
heap.Push(h, 7)
fmt.Println((*h)[0])
}
Output: 3
Mongo
What are the advantages of MongoDB? Or in other words, why choosing MongoDB and not other implementation of NoSQL?
MongoDB advantages are as follows:
- Schemaless
- Easy to scale-out
- No complex joins
- Structure of a single object is clear
What is the difference between SQL and NoSQL?
The main difference is that SQL databases are structured (data is stored in the form of tables with rows and columns - like an excel spreadsheet table) while NoSQL is unstructured, and the data storage can vary depending on how the NoSQL DB is set up, such as key-value pair, document-oriented, etc.
In what scenarios would you prefer to use NoSQL/Mongo over SQL?
- Heterogeneous data which changes often
- Data consistency and integrity is not top priority
- Best if the database needs to scale rapidly
What is a document? What is a collection?
What is an aggregator?
What is better? Embedded documents or referenced?
Have you performed data retrieval optimizations in Mongo? If not, can you think about ways to optimize a slow data retrieval?
Queries
Explain this query: db.books.find({"name": /abc/})
Explain this query: db.books.find().sort({x:1})
What is the difference between find() and find_one()?
How can you export data from Mongo DB?
- mongoexport
- programming languages
OpenShift
OpenShift Exercises
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
My First Project | Projects | Exercise | Solution |
OpenShift Self Assessment
What is OpenShift?
OpenShift is a container orchestration platform based on Kubernetes.
It can be used for deploying applications while having minimal management overhead.
How is OpenShift related to Kubernetes?
It's built on top of Kubernetes, while defining its own custom resources in addition to the built-in ones.
True or False? OpenShift is an IaaS (infrastructure as a service) solution
False. OpenShift is a PaaS (platform as a service) solution.
OpenShift - Architecture
What types of nodes does OpenShift have?
- Workers: Where the end-user applications are running
- Masters: Responsible for managing the cluster
Which component is responsible for determining pod placement?
The Scheduler.
What else is the scheduler responsible for, besides pod placement?
Application high availability by spreading pod replicas between worker nodes
OpenShift - Projects
What is a project in OpenShift?
A project in OpenShift is a Kubernetes namespace with annotations.
In simpler words, think about it as an isolated environment for users to manage and organize their resources (like Pods, Deployments, Service, etc.).
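For example (a minimal sketch, using a hypothetical project name demo):
# create a new project and switch to it
oc new-project demo
# switch between existing projects
oc project demo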
How to list all projects? What does the "STATUS" column mean in the projects list output?
oc get projects
will list all projects. The "STATUS" column can be used to see which projects are currently active.
You have a new team member and you would like to assign them the "admin" role on your project in OpenShift. How do you achieve that?
oc adm policy add-role-to-user <role> <user> -n <project>
OpenShift - Images
What is an image stream?
What would be the best way to run and manage multiple OpenShift environments?
Federation
OpenShift - Federation
What is OpenShift Federation?
Management and deployment of services and workloads across multiple independent clusters from a single API
Explain the following in regards to Federation:
- Multi Cluster
- Federated Cluster
- Host Cluster
- Member Cluster
- Multi Cluster - Multiple clusters deployed independently, not being aware of each other
- Federated Cluster - Multiple clusters managed by the OpenShift Federation Control Plane
- Host Cluster - The cluster that runs the Federation Control Plane
- Member Cluster - Cluster that is part of the Federated Cluster and connected to Federation Control Plane
OpenShift - Storage
What is a storage device? What storage devices are there?
- Hard Disks
- SSD
- USB
- Magnetic Tape
What is Random Seek Time?
The time it takes for a disk to reach the place where the data is located and read a single block/sector.
Bonus question: What is the random seek time in SSD and Magnetic Disk? Answer: Magnetic is about 10ms and SSD is somewhere between 0.08 and 0.16ms
OpenShift - Pods
What happens when a pod fails or exits due to a container crash?
The master node automatically restarts the pod, unless it fails too often.
What happens when a pod fails too often?
It's marked as bad by the master node and temporarily not restarted anymore.
How to find out on which node a certain pod is running?
oc get po -o wide
OpenShift - Services
Explain Services and their benefits
- Services in OpenShift define access policy to one or more set of pods.
- They are connecting applications together by enabling communication between them
- They provide permanent internal IP addresses and hostnames for applications
- They are able to provide basic internal load balancing
OpenShift - Labels
Explain labels. What are they? When do you use them?
- Labels are used to group or select API objects
- They are simple key-value pairs and can be included in metadata of some objects
- A common use case: group pods, services, deployments, ... all related to a certain application
OpenShift - Service Accounts
How to list Service Accounts?
oc get serviceaccounts
OpenShift - Networking
What is a Route?
A route exposes a service by giving it an externally reachable hostname
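A minimal sketch (assuming a Service named myapp already exists):
# expose an existing service through a route with a generated hostname
oc expose service myapp
# list routes, including the externally reachable hostname
oc get routes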
What does a Route consist of?
- name
- service selector
- (optional) security configuration
True or False? Router container can run only on the Master node
False. It can run on any node.
Give an example of how a router is used
- Client is using an address of application running on OpenShift
- DNS resolves to host running the router
- Router checks whether route exists
- Router proxies the request to the internal pod
OpenShift - Security
What are "Security Context Constraints"?
From OpenShift Docs: "Similar to the way that RBAC resources control user access, administrators can use security context constraints (SCCs) to control permissions for pods".
How to add the ability for the user `user1` to view the project `wonderland` assuming you are authorized to do so
oc adm policy add-role-to-user view user1 -n wonderland
How to check what is the current context?
oc whoami --show-context
OpenShift - Serverless
What is OpenShift Serverless?
- In general, 'serverless' is a cloud computing model where scaling and provisioning are taken care of for application developers, so they can focus on development rather than on infrastructure-related tasks
- OpenShift Serverless allows you to dynamically scale your applications and provides the ability to build event-driven applications, whether the sources are on Kubernetes, the cloud or on-premise solutions
- OpenShift Serverless is based on the Knative project.
What are some of the event sources you can use with OpenShift Serverless?
- Kafka
- Kubernetes APIs
- AWS Kinesis
- AWS SQS
- JIRA
- Slack
More are supported and provided with OpenShift.
Explain serverless functions
What is the difference between Serverless Containers and Serverless functions?
OpenShift - Misc
What is Replication Controller?
A Replication Controller is responsible for ensuring that the specified number of pods is running at all times.
If more pods are running than needed -> it deletes some of them
If not enough pods are running -> it creates more
Shell Scripting
Shell Scripting Exercises
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Hello World | Variables | Exercise | Solution | Basic |
Basic date | Variables | Exercise | Solution | Basic |
Great Day | Variables | Exercise | Solution | Basic |
Factors | Arithmetic | Exercise | Solution | Basic |
Argument Check | Conditionals | Exercise | Solution | Basic |
Files Size | For Loops | Exercise | Solution | Basic |
Count Chars | Input + While Loops | Exercise | Solution | Basic |
Sum | Functions | Exercise | Solution | Basic |
Number of Arguments | Case Statement | Exercise | Solution | Basic |
Empty Files | Misc | Exercise | Solution | Basic |
Directories Comparison | Misc | Exercise | :( | Basic |
It's alive! | Misc | Exercise | Solution | Intermediate |
Shell Scripting - Self Assessment
What does this line in shell scripts mean?: #!/bin/bash
#!/bin/bash
is called a shebang.
/bin/bash is the most common shell and is used as the default login shell on most Linux systems. The shell's name is an acronym for Bourne-Again SHell. Bash can execute the vast majority of scripts and is widely used because it is well developed, has many features and a better syntax.
True or False? When a certain command/line fails in a shell script, the shell script, by default, will exit and stop running
Depends on the shell and the settings used. If the script is a Bash script, then this statement is false: when a command fails, Bash by default keeps running and executes all the commands that come after the one which failed.
Most of the time we actually want the opposite behavior. In order to make Bash exit when a specific command fails, use 'set -e' in your script.
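A minimal sketch demonstrating the difference (some_nonexistent_file is just an illustrative name):
#!/usr/bin/env bash
set -e                        # exit as soon as any command fails
cat some_nonexistent_file     # this command fails...
echo "you will never see me"  # ...so this line is never reached
Without the set -e line, the echo would still run even though cat failed.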
What do you tend to include in every script you write?
A few examples:
- Comments on how to run it and/or what it does
- If it's a shell script, adding "set -e" since I want the script to exit if a certain command fails
You can have an entirely different answer. It's based only on your experience and preferences.
Today we have tools and technologies like Ansible, Puppet, Chef, ... Why would someone still use shell scripting?
- Speed
- Flexibility
- The module we need doesn't exist (perhaps a weak point because most CM technologies allow you to use what is known as a "shell" module)
- We are delivering the scripts to customers who don't have access to the public network and don't necessarily have Ansible installed on their systems.
Shell Scripting - Variables
How to define a variable with the value "Hello World"?
HW="Hello World
How to define a variable with the value of the current date?
DATE=$(date)
How to print the first argument passed to a script?
echo $1
Write a script to print "yay" unless an argument was passed and then print that argument
echo "${1:-yay}"
What would be the output of the following script?
#!/usr/bin/env bash
NINJA_TURTLE=Donatello
function the_best_ninja_turtle {
local NINJA_TURTLE=Michelangelo
echo $NINJA_TURTLE
}
NINJA_TURTLE=Raphael
the_best_ninja_turtle
Michelangelo
Explain what would be the result of each command:
echo $0
echo $?
echo $$
echo $#
echo $0 - prints the name of the script
echo $? - prints the exit status of the last executed command
echo $$ - prints the process ID (PID) of the current shell/script
echo $# - prints the number of arguments passed to the script
What is $@?
What is the difference between $@ and $*?
$@ is an array of all the arguments passed to the script, while $* is a single string containing all the arguments passed to the script.
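A minimal sketch showing the difference when both are quoted:
#!/usr/bin/env bash
# "$@" expands to one word per argument
for arg in "$@"; do echo "@: $arg"; done
# "$*" expands to a single word containing all the arguments
for arg in "$*"; do echo "*: $arg"; done
Running the script with the arguments one two prints two lines for "$@" and a single line for "$*".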
How do you get input from the user in shell scripts?
Using the keyword read
so for example read x
will wait for user input and will store it in the variable x.
How to compare variables length?
if [ ${#1} -ne ${#2} ]; then
...
Shell Scripting - Conditionals
Explain conditionals and demonstrate how to use them
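The answer is not included in the original; a minimal Bash sketch of an if/elif/else conditional:
#!/usr/bin/env bash
# usage: pass a number as the first argument
if [ "$1" -gt 10 ]; then
    echo "greater than 10"
elif [ "$1" -eq 10 ]; then
    echo "exactly 10"
else
    echo "less than 10"
fi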
In shell scripting, how to negate a conditional?
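One way (a sketch) is to negate the test with !:
# negate the whole test
if ! [ -f /tmp/some_file ]; then
    echo "file does not exist"
fi
# or negate inside the test expression
if [ ! -f /tmp/some_file ]; then
    echo "file does not exist"
fi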
In shell scripting, how to check if a given argument is a number?
regex='^[0-9]+$'
if [[ $1 =~ $regex ]]; then
...
Shell Scripting - Arithmetic Operations
How to perform arithmetic operations on numbers?
One way: $(( 1 + 2 ))
Another way: expr 1 + 2
How to check if a given number has 4 as a factor?
if [ $(($1 % 4)) -eq 0 ]; then
Shell Scripting - Loops
What is a loop? What types of loops are you familiar with?
Demonstrate how to use loops
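The answer is not included in the original; a minimal sketch of a for loop and a while loop in Bash:
#!/usr/bin/env bash
# for loop over a list of items
for fruit in apple banana cherry; do
    echo "fruit: $fruit"
done
# while loop with a counter
i=0
while [ "$i" -lt 3 ]; do
    echo "i is $i"
    i=$((i + 1))
done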
Shell Scripting - Troubleshooting
How do you debug shell scripts?
Answer depends on the language you are using for writing your scripts. If Bash is used for example then:
- Adding -x to the script I'm running in Bash
- The good old way of adding echo statements
If Python, then using pdb is very useful.
Running the following bash script, we don't get 2 as a result, why?
x = 2
echo $x
There must be no spaces around = in an assignment, so it should be x=2. With the spaces, Bash tries to run a command called x with the arguments = and 2.
Shell Scripting - Substring
How to extract everything after the last dot in a string?
${var//*.}
How to extract everything before the last dot in a string?
${var%.*}
Shell Scripting - Misc
Generate an 8 digit random number
shuf -i 10000000-99999999 -n 1
Can you give examples of some Bash best practices?
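The answer is not included in the original; a few commonly cited practices, shown as a sketch:
#!/usr/bin/env bash
# fail fast: exit on errors, on unset variables and on failures inside pipelines
set -euo pipefail
# always quote expansions to avoid word splitting and globbing
file_name="my report.txt"
echo "processing: ${file_name}"
# prefer $(...) over backticks for command substitution
today=$(date +%F)
echo "today is ${today}"
Running scripts through shellcheck is another widely recommended practice.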
What is the ternary operator? How do you use it in bash?
A short way of using if/else. An example:
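The example is missing in the original; the common Bash idiom uses && and ||:
a=1
# if the test succeeds print "yes", otherwise print "no"
[ "$a" -eq 1 ] && echo "yes" || echo "no"
Note this is not a true ternary operator: if the command after && fails, the || branch runs as well.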
What does the following code do and when would you use it?
diff <(ls /tmp) <(ls /var/tmp)
It is called 'process substitution'. It provides a way to pass the output of a command to another command when using a pipe
|
is not possible. It can be used when a command does not support STDIN
or you need the output of multiple commands.
https://superuser.com/a/1060002/167769
What are you using for testing shell scripts?
bats
SQL
SQL Exercises
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Functions vs. Comparisons | Query Improvements | Exercise | Solution |
SQL Self Assessment
What is SQL?
SQL (Structured Query Language) is a standard language for relational databases (like MySQL, MariaDB, ...).
It's used for reading, updating, removing and creating data in a relational database.
How is SQL Different from NoSQL
The main difference is that SQL databases are structured (data is stored in the form of tables with rows and columns - like an excel spreadsheet table) while NoSQL is unstructured, and the data storage can vary depending on how the NoSQL DB is set up, such as key-value pair, document-oriented, etc.
When is it best to use SQL? NoSQL?
SQL - Best used when data integrity is crucial. SQL is typically used by many businesses and areas within the finance field due to its ACID compliance.
NoSQL - Great if you need to scale things quickly. NoSQL was designed with web applications in mind, so it works great if you need to quickly spread the same information around to multiple servers
Additionally, since NoSQL does not adhere to the strict table with columns and rows structure that Relational Databases require, you can store different data types together.
Practical SQL - Basics
For these questions, we will be using the Customers and Orders tables shown below:
Customers
Customer_ID | Customer_Name | Items_in_cart | Cash_spent_to_Date |
---|---|---|---|
100204 | John Smith | 0 | 20.00 |
100205 | Jane Smith | 3 | 40.00 |
100206 | Bobby Frank | 1 | 100.20 |
ORDERS
Customer_ID | Order_ID | Item | Price | Date_sold |
---|---|---|---|---|
100206 | A123 | Rubber Ducky | 2.20 | 2019-09-18 |
100206 | A123 | Bubble Bath | 8.00 | 2019-09-18 |
100206 | Q987 | 80-Pack TP | 90.00 | 2019-09-20 |
100205 | Z001 | Cat Food - Tuna Fish | 10.00 | 2019-08-05 |
100205 | Z001 | Cat Food - Chicken | 10.00 | 2019-08-05 |
100205 | Z001 | Cat Food - Beef | 10.00 | 2019-08-05 |
100205 | Z001 | Cat Food - Kitty quesadilla | 10.00 | 2019-08-05 |
100204 | X202 | Coffee | 20.00 | 2019-04-29 |
How would I select all fields from this table?
Select *
From Customers;
How many items are in John's cart?
Select Items_in_cart
From Customers
Where Customer_Name = "John Smith";
What is the sum of all the cash spent across all customers?
Select SUM(Cash_spent_to_Date) as SUM_CASH
From Customers;
How many people have items in their cart?
Select count(1) as Number_of_People_w_items
From Customers
where Items_in_cart > 0;
How would you join the customer table to the order table?
You would join them on the unique key. In this case, the unique key is Customer_ID in both the Customers table and Orders table
How would you show which customer ordered which items?
Select c.Customer_Name, o.Item
From Customers c
Left Join Orders o
On c.Customer_ID = o.Customer_ID;
Using a with statement, how would you show who ordered cat food, and the total amount of money spent?
with cat_food as (
Select Customer_ID, SUM(Price) as TOTAL_PRICE
From Orders
Where Item like "%Cat Food%"
Group by Customer_ID
)
Select Customer_name, TOTAL_PRICE
From Customers c
Inner JOIN cat_food f
ON c.Customer_ID = f.Customer_ID
where c.Customer_ID in (Select Customer_ID from cat_food);
Although this was a simple statement, the "with" clause really shines when a complex query needs to be run on a table before joining it to another. WITH statements are nice because you create a pseudo temporary table when running your query, instead of creating a whole new table.
The sum of all the cat food purchases wasn't readily available, so we used a WITH statement to create the pseudo table, retrieve the sum of the prices spent by each customer, and then join the table normally.
Which of the following queries would you use?
Option 1:
SELECT count(*)
FROM shawarma_purchases
WHERE YEAR(purchased_at) = '2017'

vs. Option 2:
SELECT count(*)
FROM shawarma_purchases
WHERE purchased_at >= '2017-01-01' AND
      purchased_at <= '2017-12-31'
The second one:
SELECT count(*)
FROM shawarma_purchases
WHERE purchased_at >= '2017-01-01' AND
      purchased_at <= '2017-12-31'
When you use a function such as YEAR(purchased_at) in the WHERE clause, the database cannot use an index on that column and has to scan the whole table, as opposed to comparing the column as it is, in its natural state, where an index can be used.
Azure
What is Azure Portal?
Microsoft Docs: "The Azure portal is a web-based, unified console that provides an alternative to command-line tools. With the Azure portal, you can manage your Azure subscription by using a graphical user interface."
What is Azure Marketplace?
Microsoft Docs: "Azure marketplace helps connect users with Microsoft partners, independent software vendors, and startups that are offering their solutions and services, which are optimized to run on Azure."
Explain availability sets and availability zones
An availability set is a logical grouping of VMs that allows Azure to understand how your application is built to provide redundancy and availability. It is recommended that two or more VMs are created within an availability set to provide for a highly available application and to meet the 99.95% Azure SLA.
What is Azure Policy?
What is the Azure Resource Manager? Can you describe the format for ARM templates?
Explain Azure managed disks
Azure - Compute
What Azure compute services are you familiar with?
- Azure Virtual Machines
- Azure Batch
- Azure Service Fabric
- Azure Container Instances
- Azure Virtual Machine Scale Sets
What "Azure Virtual Machines" service is used for?
Windows or Linux virtual machines
What "Azure Virtual Machine Scale Sets" service is used for?
Scaling Linux or Windows virtual machines used in Azure
What "Azure Functions" service is used for?
Azure Functions is the serverless compute service of Azure.
What "Azure Container Instances" service is used for?
Running containerized applications (without the need to provision virtual machines).
What "Azure Batch" service is used for?
Running parallel and high-performance computing applications
What "Azure Service Fabric" service is used for?
What "Azure Kubernetes" service is used for?
Azure - Network
What Azure network services are you familiar with?
What's an Azure region?
What is the N-tier architecture?
Azure Storage
What Azure storage services are you familiar with?
What storage options Azure supports?
Azure Security
What is the Azure Security Center? What are some of its features?
It's a monitoring service that provides threat protection across all of the services in Azure. More specifically, it:
- Provides security recommendations based on your usage
- Continuously monitors the security settings across all the services
- Analyzes and identifies potential inbound attacks
- Detects and blocks malware using machine learning
What is Azure Active Directory?
Azure AD is a cloud-based identity service. You can use it as a standalone service or integrate it with an existing Active Directory service you are already running.
What is Azure Advanced Threat Protection?
What components are part of Azure ATP?
Where logs are stored in Azure Monitor?
Explain Azure Site Recovery
Explain what the advisor does
Explain VNet peering
Which protocols are available for configuring health probe
Explain Azure Active
What is a subscription? What types of subscriptions are there?
Explain what is a blob storage service
GCP
Explain GCP's architecture
What are the main components and services of GCP?
What GCP management tools are you familiar with?
Tell me what do you know about GCP networking
Explain Cloud Functions
What is Cloud Datastore?
What network tags are used for?
What are flow logs? Where are they enabled?
How do you list buckets?
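The answer is not included in the original; one way (assuming the Cloud SDK is installed and authenticated) is:
# list all Cloud Storage buckets in the current project
gsutil ls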
What Compute metadata key allows you to run code at startup?
startup-script
What does the following command do? `gcloud deployment-manager deployments create`
What is Cloud Code?
It is a set of tools to help developers write, run and debug GCP kubernetes based applications. It provides built-in support for rapid iteration, debugging and running applications in development and production K8s environments.
Google Kubernetes Engine (GKE)
What is GKE
- It is the managed kubernetes service on GCP for deploying, managing and scaling containerised applications using Google infrastructure.
Anthos
What is Anthos
It is a managed application platform for organisations like enterprises that require quick modernisation and certain levels of consistency for their legacy applications in a hybrid or multicloud world. From this explanation the core ideas can be drawn from these statements;
- Managed -> the customer does not need to worry about the underlying software integrations, they just enable the API.
- application platform -> It consists of open source tools like K8s, Knative, Istio and Tekton
- Enterprises -> these are usually organisations with complex needs
- Consistency -> to have the same policies declaratively initiated to be run anywhere securely e.g on-prem, GCP or other-clouds (AWS or Azure)
Fun fact: Anthos means flower in Greek; flowers grow in the ground (earth) but need rain from the clouds to flourish.
List the technical components that make up Anthos
- Infrastructure management - Google Kubernetes Engine (GKE)
- Cluster management - GKE, Ingress for Anthos
- Service management - Anthos Service Mesh
- Policy enforcement - Anthos Config Management, Anthos Enterprise Data Protection, Policy Controller
- Application deployment - CI/CD tools like Cloud Build, GitLab
- Application development - Cloud Code
What is the primary computing environment for Anthos to easily manage workload deployment?
- Google Kubernetes Engine (GKE)
How does Anthos handle the control plane and node components for GKE?
On GCP the kubernetes api-server is the only control plane component exposed to customers whilst compute engine manages instances in the project.
Which load balancing options are available?
- Networking load balancing for L4 and HTTP(S) Load Balancing for L7 which are both managed services that do not require additional configuration.
- Ingress for Anthos which allows the ability to deploy a load balancer that serves an application across multiple clusters on GKE
List and explain the enterprise security capabilities provided by Anthos
- Control plane security - GCP manages and maintains the K8s control plane out of the box. The user can secure the api-server by using master authorized networks and private clusters. These allow the user to disable access on the public IP address by assigning a private IP address to the master.
- Node security - By default workloads are provisioned on Compute engine instances that use Google's Container Optimised OS. This operating system implements a locked-down firewall, limited user accounts with root disabled and a read-only filesystem. There is a further option to enable GKE Sandbox for stronger isolation in multi-tenant deployment scenarios.
- Network security - Within a created cluster VPC, Anthos GKE leverages a powerful software-defined network that enables simple Pod-to-Pod communications. Network policies allow locking down ingress and egress connections in a given namespace. Filtering can also be implemented to incoming load-balanced traffic for services that require external access, by supplying whitelisted CIDR IP ranges.
- Workload security - Running workloads run with limited privileges, default Docker AppArmor security policies are applied to all Kubernetes Pods. Workload identity for Anthos GKE aligns with the open source kubernetes service accounts with GCP service account permissions.
- Audit logging - Administrators are given a way to retain, query, process and alert on events of the deployed environments.
How can workloads deployed on Anthos GKE on-prem clusters securely connect to Google Cloud services?
- Google Cloud Virtual Private Network (Cloud VPN) - this is for secure networking
- Google Cloud Key Management Service (Cloud KMS) - for key management
What is Island Mode configuration with regards to networking in Anthos GKE deployed on-prem?
- This is when pods can directly talk to each other within a cluster, but cannot be reached from outside the cluster thus forming an "island" within the network that is not connected to the external network.
Explain Anthos Config Management
It is a core component of the Anthos stack which provides platform, service and security operators with a single, unified approach to multi-cluster management that spans both on-premises and cloud environments. It closely follows K8s best practices, favoring declarative approaches over imperative operations, and actively monitors cluster state and applies the desired state as defined in Git. It includes three key components as follows:
- An importer that reads from a central Git repository
- A component that synchronises stored configuration data into K8s objects
- A component that monitors drift between desired and actual cluster configurations with a capability of reconciliation when need rises.
How does Anthos Config Management help?
It follows common modern software development practices which makes cluster configuration, management and policy changes auditable, revertable, and versionable easily enforcing IT governance and unifying resource management in an organisation.
What is Anthos Service Mesh?
- It is a suite of tools that assist in monitoring and managing deployed services on Anthos of all shapes and sizes whether running in cloud, hybrid or multi-cloud environments. It leverages the APIs and core components from Istio, a highly configurable and open-source service mesh platform.
Describe the two main components of Anthos Service Mesh
- Data plane - it consists of a set of distributed proxies that mediate all inbound and outbound network traffic between individual services which are configured using a centralised control plane and an open API
- Control plane - is a fully managed offering outside of Anthos GKE clusters to simplify management overhead and ensure highest possible availability.
What are the components of the managed control plane of Anthos Service Mesh?
- Traffic Director - it is GCP's fully managed service mesh traffic control plane, responsible for translating Istio API objects into configuration information for the distributed proxies, as well as directing service mesh ingress and egress traffic
- Managed CA - is a centralised certificate authority responsible for providing SSL certificates to each of the distributed proxies, authentication information and distributing secrets
- Operations tooling - formerly stackdriver, provides a managed ingestion point for observability and telemetry, specifically monitoring, tracing and logging data generated by each of the proxies. This powers the observability dashboard for operators to visually inspect their services and service dependencies assisting in the implementation of SRE best practices for monitoring SLIs and establishing SLOs.
How does Anthos Service Mesh help?
The tool and technology integration that makes up Anthos Service Mesh delivers significant operational benefits to Anthos environments, with minimal additional overhead, as follows:
- Uniform observability - the data plane reports service to service communication back to the control plane generating a service dependency graph. Traffic inspection by the proxy inserts headers to facilitate distributed tracing, capturing and reporting service logs together with service-level metrics (i.e latency, errors, availability).
- Operational agility - fine-grained controls for managing the flow of inter-mesh (north-south) and intra-mesh (east-west) traffic are provided.
- Policy-driven security - policies can be enforced consistently across diverse protocols and runtimes as service communications are secured by default.
List possible use cases of traffic controls that can be implemented within Anthos Service Mesh
- Traffic splitting across differing service versions for canary or A/B testing
- Circuit breaking to prevent cascading failures
- Fault injection to help build resilient and fault-tolerant deployments
- HTTP header-based traffic steering between individual services or versions
What is Cloud Run for Anthos?
It is part of the Anthos stack that brings a serverless container experience to Anthos, offering a high-level platform experience on top of K8s clusters. It is built with Knative, an open-source operator for K8s that brings serverless application serving and eventing capabilities.
How does Cloud Run for Anthos simplify operations?
Platform teams in organisations that wish to offer developers additional tools to test, deploy and run applications can use Knative to enhance this experience on Anthos as Cloud Run. Below are some of the benefits;
- Easy migration from K8s deployments - Without Cloud Run, platform engineers have to configure Deployment, Service and HorizontalPodAutoscaler (HPA) objects to get load balancing and autoscaling. If the application is already serving traffic it becomes hard to change configurations or roll back efficiently. With Cloud Run all of this is managed: the Knative service manifest alone describes the application to be autoscaled and load balanced
- Autoscaling - a sudden traffic spike may cause application containers in K8s to crash due to overload thus an efficient automated autoscaling is executed to serve the high volume of traffic
- Networking - it has built-in load balancing capabilities and policies for traffic splitting between multiple versions of an application.
- Releases and rollouts - supports the notion of the Knative API's revisions, which describe new versions or different configurations of your application, and canary deployments by splitting traffic.
- Monitoring - observing and recording metrics such as latency, error rate and requests per second.
List and explain three high-level out of the box autoscaling primitives offered by Cloud Run for Anthos that do not exist in K8s natively
- Rapid, request-based autoscaling - default autoscalers monitor request metrics which allows Cloud Run for Anthos to handle spiky traffic patterns smoothly
- Concurrency controls - limits such as max in-flight requests per container are enforced to ensure the container does not become overloaded and crash. More containers are added to handle the spiky traffic, buffering the requests.
- Scale to zero - if an application is inactive for a while Cloud Run scales it down to zero to reduce its footprint. Alternatively one can turn off scale-to-zero to prevent cold starts.
List some Cloud Run for Anthos use cases
As it does not support stateful applications or sticky sessions, it is suitable for running stateless applications such as:
- Machine learning model predictions e.g Tensorflow serving containers
- API gateways, API middleware, web front ends and Microservices
- Event handlers, ETL
OpenStack
What components/projects of OpenStack are you familiar with?
Can you tell me what each of the following services/projects is responsible for?:
- Nova
- Neutron
- Cinder
- Glance
- Keystone
- Nova - Manage virtual instances
- Cinder - Block Storage
- Keystone - Authentication service across the cloud
Identify the service/project used for each of the following:
- Copy or snapshot instances
- GUI for viewing and modifying resources
- Block Storage
- Manage virtual instances
- Glance - Images Service. Also used for copying or snapshot instances
- Horizon - GUI for viewing and modifying resources
- Cinder - Block Storage
- Nova - Manage virtual instances
What is a tenant/project?
Determine true or false:
- OpenStack is free to use
- The service responsible for networking is Glance
- The purpose of tenant/project is to share resources between different projects and users of OpenStack
Describe in detail how you bring up an instance with a floating IP
You get a call from a customer saying: "I can ping my instance but can't connect (ssh) it". What might be the problem?
What types of networks OpenStack supports?
How do you debug OpenStack storage issues? (tools, logs, ...)
How do you debug OpenStack compute issues? (tools, logs, ...)
OpenStack Deployment & TripleO
Have you deployed OpenStack in the past? If yes, can you describe how you did it?
Are you familiar with TripleO? How is it different from Devstack or Packstack?
You can read about TripleO right here
OpenStack Compute
Can you describe Nova in detail?
- Used to provision and manage virtual instances
- It supports Multi-Tenancy in different levels - logging, end-user control, auditing, etc.
- Highly scalable
- Authentication can be done using internal system or LDAP
- Supports multiple types of block storage
- Tries to be hardware and hypervisor agnostic
What do you know about Nova architecture and components?
- nova-api - the server which serves metadata and compute APIs
- the different Nova components communicate by using a queue (Rabbitmq usually) and a database
- a request for creating an instance is inspected by nova-scheduler which determines where the instance will be created and running
- nova-compute is the component responsible for communicating with the hypervisor for creating the instance and manage its lifecycle
OpenStack Networking (Neutron)
Explain Neutron in detail
- One of the core component of OpenStack and a standalone project
- Neutron focused on delivering networking as a service
- With Neutron, users can set up networks in the cloud and configure and manage a variety of network services
- Neutron interacts with:
- Keystone - authorize API calls
- Nova - nova communicates with neutron to plug NICs into a network
- Horizon - supports networking entities in the dashboard and also provides topology view which includes networking details
Explain each of the following components:
- neutron-dhcp-agent
- neutron-l3-agent
- neutron-metering-agent
- neutron-*-agent
- neutron-server
- neutron-l3-agent - L3/NAT forwarding (provides external network access for VMs for example)
- neutron-dhcp-agent - DHCP services
- neutron-metering-agent - L3 traffic metering
- neutron-*-agent - manages local vSwitch configuration on each compute node (based on the chosen plugin)
- neutron-server - exposes networking API and passes requests to other plugins if required
Explain these network types:
- Management Network
- Guest Network
- API Network
- External Network
- Management Network - used for internal communication between OpenStack components. Any IP address in this network is accessible only within the datacenter
- Guest Network - used for communication between instances/VMs
- API Network - used for services API communication. Any IP address in this network is publicly accessible
- External Network - used for public communication. Any IP address in this network is accessible by anyone on the internet
In which order should you remove the following entities:
- Network
- Port
- Router
- Subnet
- Port
- Subnet
- Router
- Network
There are many reasons for that. One for example: you can't remove router if there are active ports assigned to it.
What is a provider network?
What components and services exist for L2 and L3?
What is the ML2 plug-in? Explain its architecture
What is the L2 agent? How does it work and what is it responsible for?
What is the L3 agent? How does it work and what is it responsible for?
Explain what the Metadata agent is responsible for
What networking entities Neutron supports?
How do you debug OpenStack networking issues? (tools, logs, ...)
OpenStack - Glance
Explain Glance in detail
- Glance is the OpenStack image service
- It handles requests related to instances disks and images
- Glance is also used for creating snapshots for quick instance backups
- Users can use Glance to create new images or upload existing ones
Describe Glance architecture
- glance-api - responsible for handling image API calls such as retrieval and storage. It consists of two APIs: 1. registry-api - responsible for internal requests 2. user API - can be accessed publicly
- glance-registry - responsible for handling image metadata requests (e.g. size, type, etc). This component is private which means it's not available publicly
- metadata definition service - API for custom metadata
- database - for storing images metadata
- image repository - for storing images. This can be a filesystem, swift object storage, HTTP, etc.
OpenStack - Swift
Explain Swift in detail
- Swift is the Object Store service: a highly available, distributed, eventually consistent store designed for storing a lot of data
- Swift distributes data across multiple servers while writing it to multiple disks
- One can choose to add additional servers to scale the cluster, all while Swift maintains the integrity of the information and its replicas.
Can users store by default an object of 100GB in size?
Not by default. Object Storage API limits the maximum to 5GB per object but it can be adjusted.
Explain the following in regards to Swift:
- Container
- Account
- Object
- Container - Defines a namespace for objects.
- Account - Defines a namespace for containers
- Object - Data content (e.g. image, document, ...)
True or False? there can be two objects with the same name in the same container but not in two different containers
False. Two objects can have the same name if they are in different containers.
OpenStack - Cinder
Explain Cinder in detail
- Cinder is OpenStack Block Storage service
- It basically provides users with storage resources they can consume with other services such as Nova
- One of the most used implementations of storage supported by Cinder is LVM
- From user perspective this is transparent which means the user doesn't know where, behind the scenes, the storage is located or what type of storage is used
Describe Cinder's components
- cinder-api - receives API requests
- cinder-volume - manages attached block devices
- cinder-scheduler - responsible for selecting the optimal storage node or backend on which to create the volume
OpenStack - Keystone
Can you describe the following concepts in regards to Keystone?
- Role
- Tenant/Project
- Service
- Endpoint
- Token
- Role - A list of rights and privileges determining what a user or a project can perform
- Tenant/Project - Logical representation of a group of resources isolated from other groups of resources. It can be an account, organization, ...
- Service - An endpoint which the user can use for accessing different resources
- Endpoint - a network address which can be used to access a certain OpenStack service
- Token - Used for accessing resources; it carries a scope describing which resources can be accessed
What are the properties of a service? In other words, how a service is identified?
Using:
- Name
- ID number
- Type
- Description
Explain the following: - PublicURL - InternalURL - AdminURL
- PublicURL - Publicly accessible through public internet
- InternalURL - Used for communication between services
- AdminURL - Used for administrative management
What is a service catalog?
A list of services and their endpoints
OpenStack Advanced - Services
Describe each of the following services
- Swift
- Sahara
- Ironic
- Trove
- Aodh
- Ceilometer
- Swift - highly available, distributed, eventually consistent object/blob store
- Sahara - Manage Hadoop Clusters
- Ironic - Bare Metal Provisioning
- Trove - Database as a service that runs on OpenStack
- Aodh - Alarms Service
- Ceilometer - Track and monitor usage
Identify the service/project used for each of the following:
- Database as a service which runs on OpenStack
- Bare Metal Provisioning
- Track and monitor usage
- Alarms Service
- Manage Hadoop Clusters
- highly available, distributed, eventually consistent object/blob store
- Database as a service which runs on OpenStack - Trove
- Bare Metal Provisioning - Ironic
- Track and monitor usage - Ceilometer
- Alarms Service - Aodh
- Manage Hadoop Clusters - Sahara
- highly available, distributed, eventually consistent object/blob store - Swift
OpenStack Advanced - Keystone
Can you describe Keystone service in detail?
- You can't have OpenStack deployed without Keystone
- It Provides identity, policy and token services
- The authentication provided is for both users and services
- The authorization supported is token-based and user-based.
- There is a policy defined based on RBAC stored in a JSON file and each line in that file defines the level of access to apply
Describe Keystone architecture
- There is a service API and admin API through which Keystone gets requests
- Keystone has four backends:
- Token Backend - Temporary Tokens for users and services
- Policy Backend - Rules management and authorization
- Identity Backend - users and groups (either standalone DB, LDAP, ...)
- Catalog Backend - Endpoints
- It has pluggable environment where you can integrate with:
- LDAP
- KVS (Key Value Store)
- SQL
- PAM
- Memcached
Describe the Keystone authentication process
- Keystone gets a call/request and checks whether it's from an authorized user, using username, password and authURL
- Once confirmed, Keystone provides a token.
- A token contains a list of the user's projects, so there is no need to authenticate every time; the token can be submitted instead
OpenStack Advanced - Compute (Nova)
What each of the following does?:
- nova-api
- nova-compute
- nova-conductor
- nova-cert
- nova-consoleauth
- nova-scheduler
- nova-api - responsible for managing requests/calls
- nova-compute - responsible for managing instance lifecycle
- nova-conductor - Mediates between nova-compute and the database so nova-compute doesn't access it directly
What types of Nova proxies are you familiar with?
- Nova-novncproxy - Access through VNC connections
- Nova-spicehtml5proxy - Access through SPICE
- Nova-xvpvncproxy - Access through a VNC connection
OpenStack Advanced - Networking (Neutron)
Explain BGP dynamic routing
What is the role of network namespaces in OpenStack?
OpenStack Advanced - Horizon
Can you describe Horizon in detail?
- Django-based project focusing on providing an OpenStack dashboard and the ability to create additional customized dashboards
- You can use it to access the different OpenStack services resources - instances, images, networks, ...
- By accessing the dashboard, users can use it to list, create, remove and modify the different resources
- It's also highly customizable and you can modify or add to it based on your needs
What can you tell about Horizon architecture?
- API is backward compatible
- There are three types of dashboards: user, system and settings
- It provides core support for all OpenStack core projects such as Neutron, Nova, etc. (out of the box, no need to install extra packages or plugins)
- Anyone can extend the dashboards and add new components
- Horizon provides templates and core classes from which one can build its own dashboard
Security
What is DevSecOps? What are its core principles?
What the "Zero Trust" concept means? How Organizations deal with it?
Codefresh definition: "Zero trust is a security concept that is centered around the idea that organizations should never trust anyone or anything that does not originate from their domains. Organizations seeking zero trust automatically assume that any external services it commissions have security breaches and may leak sensitive information"
What does it mean to be "FIPS compliant"?
What is a Certificate Authority?
Explain RBAC (Role-based Access Control)
Access control based on user roles (i.e., a collection of access authorizations a user receives based on an explicit or implicit assumption of a given role). Role permissions may be inherited through a role hierarchy and typically reflect the permissions needed to perform defined functions within an organization. A given role may apply to a single individual or to several individuals.
- RBAC mapped to job function, assumes that a person will take on different roles, overtime, within an organization and different responsibilities in relation to IT systems.
Security - Authentication and Authorization
Explain Authentication and Authorization
Authentication is the process of identifying whether a service or a person is who they claim to be. Authorization is the process of identifying what level of access the service or the person has (after authentication is done).
What authentication methods are there?
Give an example of basic authentication process
A user uses the browser to authenticate to some server. It does so by using the authorization field which is constructed from the username and the password combined with a single colon. The result string is encoded using a certain character set which is compatible with US-ASCII. The authorization method + a space is prepended to the encoded string.
Explain Token-based authentication
Explain Risk-based authentication
Explain what is Single Sign-On
SSO (Single Sign-on), is a method of access control that enables a user to log in once and gain access to the resources of multiple software systems without being prompted to log in again.
Explain MFA (Multi-Factor Authentication)
Multi-Factor Authentication (of which 2FA is the common two-factor case) requires the user to present two or more pieces of evidence (credentials) when logging into an account.
- The credentials fall into any of these three categories: something you know (like a password or PIN), something you have (like a smart card), or something you are (like your fingerprint). Credentials must come from two different categories to enhance security.
Security - Passwords
How do you manage sensitive information (like passwords) in different tools and platforms?
What password attacks are you familiar with?
- Dictionary
- Brute force
- Password Spraying
- Social Engineering
- Whaling
- Vishing
- Phishing
How to mitigate password attacks?
- Strong password policy
- Do not reuse passwords
- ReCaptcha
- Training personnel against Social Engineering
- Risk Based Authentication
- Rate limiting
- MFA
Security - Cookies
What are cookies? Explain cookie-based authentication
True or False? Cookie-based authentication is stateful
True. Cookie-based authentication session must be kept on both server and client-side.
Explain the flow of using cookies
- User enters credentials
- The server verifies the credentials -> a session is created and stored in the database
- A cookie with the session ID is set in the browser of that user
- On every request, the session ID is verified against the database
- The session is destroyed (both on client-side and server-side) when the user logs out
Security - SSH
What is SSH how does it work?
Wikipedia Definition: "SSH or Secure Shell is a cryptographic network protocol for operating network services securely over an unsecured network."
Hostinger.com Definition: "SSH, or Secure Shell, is a remote administration protocol that allows users to control and modify their remote servers over the Internet."
This site explains it in a good way.
What is the role of an SSH key?
Security - Cryptography
Explain Symmetrical encryption
Symmetric encryption is any technique where the same key is used to both encrypt and decrypt the data or the entire communication.
Explain Asymmetrical encryption
Asymmetric encryption is any technique where two different keys are used for encryption and decryption; these keys are known as the public key and the private key.
What is "Key Exchange" (or "key establishment") in cryptography?
Wikipedia: "Key exchange (also key establishment) is a method in cryptography by which cryptographic keys are exchanged between two parties, allowing use of a cryptographic algorithm."
True or False? The symmetrical encryption is making use of public and private keys where the private key is used to decrypt the data encrypted with a public key
False. This description fits the asymmetrical encryption.
True or False? The private key can be mathematically computed from a public key
False.
True or False? In the case of SSH, asymmetrical encryption is not used for the entire SSH session
True. It is only used during the key exchange, to establish the key used for symmetric encryption.
What is Hashing?
How hashes are part of SSH?
Hashes are used in SSH to verify the authenticity of messages and to verify that nothing has tampered with the data received.
Explain the following:
- Vulnerability
- Exploits
- Risk
- Threat
What is XSS?
Cross Site Scripting (XSS) is a type of attack in which the attacker inserts browser-executable code within an HTTP response. The injected attack is not stored in the web application; it only affects users who open a maliciously crafted link or third-party web page. A successful attack allows the attacker to access any cookies, session tokens, or other sensitive information retained by the browser and used with that site.
You can test by detecting user-defined variables and how to input them. This includes hidden or non-obvious inputs such as HTTP parameters, POST data, hidden form field values, and predefined radio or selection values. You then analyze each found vector to see if there are potential vulnerabilities, craft input data for each input vector, and test whether the crafted input works.
What is an SQL injection? How to manage it?
SQL injection is an attack that consists of inserting either a partial or full SQL query through data input from the browser into the web application. A successful SQL injection allows the attacker to read sensitive information stored in the database of the web application.
You can manage it by using stored procedures and by making the application sanitize the user input to get rid of the risk of code injection. Otherwise the user could enter malicious SQL that would then be executed within the procedure.
What is Certification Authority?
How do you identify and manage vulnerabilities?
Explain "Privilege Restriction"
How HTTPS is different from HTTP?
What types of firewalls are there?
What is DDoS attack? How do you deal with it?
What is port scanning? When is it used?
What is the difference between asynchronous and synchronous encryption?
Explain Man-in-the-middle attack
Explain CVE and CVSS
What is ARP Poisoning?
Describe how do you secure public repositories
What is DNS Spoofing? How to prevent it?
DNS spoofing occurs when a particular DNS server's records are "spoofed" or altered maliciously to redirect traffic to the attacker. This redirection of traffic allows the attacker to spread malware, steal data, etc.
Prevention
- Use encrypted data transfer protocols - Using end-to-end encryption via SSL/TLS will help decrease the chance that a website or its visitors are compromised by DNS spoofing.
- Use DNSSEC - DNSSEC, or Domain Name System Security Extensions, uses digitally signed DNS records to help determine data authenticity.
- Implement DNS spoofing detection mechanisms - it's important to implement DNS spoofing detection software. Products such as XArp help protect against ARP cache poisoning by inspecting the data that comes through before transmitting it.
What can you tell me about Stuxnet?
Stuxnet is a computer worm that was originally aimed at Iran’s nuclear facilities and has since mutated and spread to other industrial and energy-producing facilities. The original Stuxnet malware attack targeted the programmable logic controllers (PLCs) used to automate machine processes. It generated a flurry of media attention after it was discovered in 2010 because it was the first known virus to be capable of crippling hardware and because it appeared to have been created by the U.S. National Security Agency, the CIA, and Israeli intelligence.
What can you tell me about the BootHole vulnerability?
What can you tell me about Spectre?
Spectre is an attack method which allows a hacker to “read over the shoulder” of a program it does not have access to. Using code, the hacker forces the program to pull up its encryption key allowing full access to the program
Explain OAuth
Explain "Format String Vulnerability"
Explain DMZ
Explain TLS
What is CSRF? How to handle CSRF?
Cross-Site Request Forgery (CSRF) is an attack that forces the end user to initiate an unwanted action on a web application in which the user has an authenticated session. The attacker may, for example, use an email to trick the end user into clicking a link that then executes malicious actions. When a CSRF attack is successful, it compromises the end user's data.
You can use OWASP ZAP to analyze a "request", and if it appears that there is no protection against cross-site request forgery when the Security Level is set to 0 (the value of csrf-token is SecurityIsDisabled), one can use data from this request to prepare a CSRF attack by using OWASP ZAP. A common defense is sketched below.
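A rough sketch of the synchronizer token pattern in Python; the session dictionary and handler functions are hypothetical stand-ins for a real web framework:

```python
# The server issues a random token with the form and rejects state-changing
# requests that don't echo it back, which a forged cross-site request cannot do.
import secrets

session = {}

def render_form():
    session["csrf_token"] = secrets.token_urlsafe(32)
    return f'<input type="hidden" name="csrf_token" value="{session["csrf_token"]}">'

def handle_post(form_data):
    if not secrets.compare_digest(form_data.get("csrf_token", ""), session.get("csrf_token", "")):
        return "403 Forbidden"
    return "200 OK"

print(render_form())
print(handle_post({"csrf_token": session["csrf_token"]}))  # 200 OK
print(handle_post({}))                                      # 403 Forbidden
```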
Explain HTTP Header Injection vulnerability
HTTP Header Injection vulnerabilities occur when user input is insecurely included in server response headers. If an attacker can inject newline characters into the header, then they can inject new HTTP headers and also, by injecting an empty line, break out of the headers into the message body and write arbitrary content into the application's response.
What security sources are you using to keep updated on latest news?
What TCP and UDP vulnerabilities are you familiar with?
Does using VLANs contribute to network security?
What are some examples of security architecture requirements?
What is an air-gapped network (or air-gapped environment)? What are its advantages and disadvantages?
Explain what is Buffer Overflow
A buffer overflow (or buffer overrun) occurs when the volume of data exceeds the storage capacity of the memory buffer. As a result, the program attempting to write the data to the buffer overwrites adjacent memory locations.
What is Nonce?
What is SSRF?
SSRF (Server-Side Request Forgery) is a vulnerability that lets an attacker make the server issue arbitrary requests to destinations of the attacker's choosing.
Read more about it at portswigger.net
Explain MAC flooding attack
A MAC address flooding attack (CAM table flooding attack) is a type of network attack where an attacker connected to a switch port floods the switch interface with a very large number of Ethernet frames, each with a different fake source MAC address.
What is port flooding?
What is "Diffie-Hellman key exchange" and how does it work?
Explain "Forward Secrecy"
What is Cache Poisoned Denial of Service?
CPDoS or Cache Poisoned Denial of Service. It poisons the CDN cache. By manipulating certain header requests, the attacker forces the origin server to return a Bad Request error which is stored in the CDN’s cache. Thus, every request that comes after the attack will get an error page.
Security - Threats
Explain "Advanced persistent threat (APT)"
What is a "Backdoor" in information security?
Puppet
What is Puppet? How does it work?
Explain Puppet architecture
Can you compare Puppet to other configuration management tools? Why did you choose to use Puppet?
Explain the following:
- Module
- Manifest
- Node
Explain Facter
What is MCollective?
Do you have experience with writing modules? Which module have you created and for what?
Explain what is Hiera
Elastic
What is the Elastic Stack?
The Elastic Stack consists of:
- Elasticsearch
- Kibana
- Logstash
- Beats
- Elastic Hadoop
- APM Server
Elasticsearch, Logstash and Kibana are also known as the ELK stack.
Explain what is Elasticsearch
From the official docs:
"Elasticsearch is a distributed document store. Instead of storing information as rows of columnar data, Elasticsearch stores complex data structures that have been serialized as JSON documents"
What is Logstash?
From the blog:
"Logstash is a powerful, flexible pipeline that collects, enriches and transports data. It works as an extract, transform & load (ETL) tool for collecting log messages."
Explain what beats are
Beats are lightweight data shippers. These data shippers are installed on the clients where the data resides.
Examples of beats: Filebeat, Metricbeat, Auditbeat. There are many more.
What is Kibana?
From the official docs:
"Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps."
Describe what happens from the moment an app logged some information until it's displayed to the user in a dashboard when the Elastic stack is used
The process may vary based on the chosen architecture and the processing you may want to apply to the logs. One possible workflow is:
- The data logged by the application is picked up by Filebeat and sent to Logstash
- Logstash processes the log based on the defined filters. Once done, the output is sent to Elasticsearch
- Elasticsearch stores the document it got and the document is indexed for quick future access
- The user creates visualizations in Kibana which are based on the indexed data
- The user creates a dashboard which is composed of the visualizations created in the previous step
Elasticsearch
What is a data node?
This is where data is stored and also where different processing takes place (e.g. when you search for data).
What is a master node?
Part of a master node's responsibilities:
- Track the status of all the nodes in the cluster
- Verify replicas are working and the data is available from every data node.
- No hot nodes (no data node that works much harder than other nodes)
While there can be multiple master nodes, in reality only one of them is the elected master node.
What is an ingest node?
A node responsible for parsing the data. If you don't use Logstash, this node can receive data from Beats and parse it, similarly to how it can be parsed in Logstash.
What is Coordinating node?
A coordinating node is responsible for routing requests in and out of the cluster (to and from the data nodes).
How is data stored in Elasticsearch?
- Data is stored in an index
- The index is spread across the cluster using shards
What is an Index?
Index in Elastic is in most cases compared to a whole database from the SQL/NoSQL world.
You can choose to have one index to hold all the data of your app or have multiple indices where each index holds a different type of data from your app (e.g. an index for each service your app is running).
The official docs also offer a great explanation (in general, it's really good documentation, as every project should have):
"An index can be thought of as an optimized collection of documents and each document is a collection of fields, which are the key-value pairs that contain your data"
Explain Shards
An index is split into shards and documents are hashed to a particular shard. Each shard may be on a different node in a cluster and each one of the shards is a self contained index.
This allows Elasticsearch to scale to an entire cluster of servers.
What is an Inverted Index?
From the official docs:
"An inverted index lists every unique word that appears in any document and identifies all of the documents each word occurs in."
What is a Document?
Continuing with the comparison to SQL/NoSQL, a Document in Elastic is a row in a table in the case of SQL, or a document in a collection in the case of NoSQL. As in NoSQL, a Document is a JSON object which holds data on a unit in your app. What this unit is depends on your app. If your app is related to books, then each document describes a book. If your app is about shirts, then each document is a shirt.
You check the health of your elasticsearch cluster and it's red. What does it mean? What can cause the status to be yellow instead of green?
Red means some data is unavailable (at least one primary shard is not allocated). Yellow means all primary shards are allocated but some replica shards are not; this can be caused by running a single-node cluster instead of a multi-node one.
True or False? Elasticsearch indexes all data in every field and each indexed field has the same data structure for unified and quick query ability
False. From the official docs:
"Each indexed field has a dedicated, optimized data structure. For example, text fields are stored in inverted indices, and numeric and geo fields are stored in BKD trees."
What reserved fields does a document have?
- _index
- _id
- _type
Explain Mapping
What are the advantages of defining your own mapping? (or: when would you use your own mapping?)
- You can optimize fields for partial matching
- You can define custom formats of known fields (e.g. date)
- You can perform language-specific analysis
Explain Replicas
In a network/cloud environment where failures can be expected any time, it is very useful and highly recommended to have a failover mechanism in case a shard/node somehow goes offline or disappears for whatever reason. To this end, Elasticsearch allows you to make one or more copies of your index’s shards into what are called replica shards, or replicas for short.
Can you explain Term Frequency & Document Frequency?
Term Frequency is how often a term appears in a given document and Document Frequency is how often a term appears in all documents. They both are used for determining the relevance of a term by calculating Term Frequency / Document Frequency.
You check "Current Phase" under "Index lifecycle management" and you see it's set to "hot". What does it mean?
"The index is actively being written to". More about the phases here
What does this command do? curl -X PUT "localhost:9200/customer/_doc/1?pretty" -H 'Content-Type: application/json' -d'{ "name": "John Doe" }'
It creates the customer index if it doesn't exist and adds a new document with the field name set to "John Doe". Since the ID 1 is specified in the URL, the document gets the ID 1.
What will happen if you run the previous command twice? What about running it 100 times?
- If the name value was different, it would update "name" to the new value
- In any case, it bumps the version field by one
What is the Bulk API? What would you use it for?
The Bulk API is used when you need to index multiple documents. For a high number of documents it is significantly faster than individual requests, since there are fewer network roundtrips.
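A minimal sketch, assuming the third-party requests library and a local cluster on localhost:9200 (the index name and documents are made up); the bulk body is newline-delimited JSON, an action line followed by a source line:

```python
# Index two documents in a single _bulk request instead of two separate requests.
import json
import requests

docs = [{"name": "John Doe"}, {"name": "Jane Doe"}]

lines = []
for i, doc in enumerate(docs, start=1):
    lines.append(json.dumps({"index": {"_index": "customer", "_id": str(i)}}))
    lines.append(json.dumps(doc))
body = "\n".join(lines) + "\n"  # the bulk API requires a trailing newline

response = requests.post(
    "http://localhost:9200/_bulk",
    data=body,
    headers={"Content-Type": "application/x-ndjson"},
)
print(response.json())
```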
Query DSL
Explain Elasticsearch query syntax (Booleans, Fields, Ranges)
Explain what is Relevance Score
Explain Query Context and Filter Context
From the official docs:
"In the query context, a query clause answers the question “How well does this document match this query clause?” Besides deciding whether or not the document matches, the query clause also calculates a relevance score in the _score meta-field."
"In a filter context, a query clause answers the question “Does this document match this query clause?” The answer is a simple Yes or No — no scores are calculated. Filter context is mostly used for filtering structured data"
Describe how an architecture of a production environment with large amounts of data would be different from a small-scale environment
There are several possible answers for this question. One of them is as follows:
A small-scale architecture of Elastic will consist of the Elastic stack as it is. This means we will have Beats, Logstash, Elasticsearch and Kibana.
A production environment with large amounts of data can include some kind of buffering component (e.g. Redis or RabbitMQ) and also a security component such as Nginx.
Logstash
What are Logstash plugins? What plugins types are there?
- Input Plugins - how to collect data from different sources
- Filter Plugins - processing data
- Output Plugins - push data to different outputs/services/platforms
What is grok?
A Logstash filter plugin that parses unstructured log data into structured fields by matching it against named patterns (which are built on regular expressions).
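Since grok patterns are essentially named regular expressions, here is a rough Python analogy (the log line and field names are made up for the example):

```python
# Match a log line into named fields, similar in spirit to a grok filter.
import re

line = "10.0.0.5 GET /index.html 200"
pattern = r"(?P<client>\S+) (?P<method>\S+) (?P<path>\S+) (?P<status>\d+)"

match = re.match(pattern, line)
print(match.groupdict())  # {'client': '10.0.0.5', 'method': 'GET', 'path': '/index.html', 'status': '200'}
```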
How does grok work?
What grok patterns are you familiar with?
What is `_grokparsefailure?`
How do you test or debug grok patterns?
What are Logstash Codecs? What codecs are there?
Kibana
What can you find under "Discover" in Kibana?
The raw data as it is stored in the index. You can search and filter it.
You see in Kibana, after clicking on Discover, "561 hits". What does it mean?
The total number of documents matching the search. If no query is used, it's simply the total number of documents.
What can you find under "Visualize"?
"Visualize" is where you can create visual representations for your data (pie charts, graphs, ...)
What visualization types are supported/included in Kibana?
What visualization type would you use for statistical outliers
Describe in detail how do you create a dashboard in Kibana
Filebeat
What is Filebeat?
If one is using ELK, is it a must to also use filebeat? In what scenarios it's useful to use filebeat?
True or False? A single harvester harvests multiple files, according to the limits set in filebeat.yml
False. One harvester harvests one file.
What are filebeat modules?
Elastic Stack
How do you secure an Elastic Stack?
You can generate certificates with the provided Elastic utilities and change the configuration to enable security using the certificates model.
DNS
What is DNS? What is it used for?
DNS (Domain Name System) is a protocol used for converting domain names into IP addresses.
As you know, computer networking is done with IP addresses (layer 3 of the OSI model), but for us humans it's hard to remember IP addresses; it's much easier to remember names. This is why we need something such as DNS to convert any domain name we type into an IP address. You can think of DNS as a huge phonebook or database where each name has a corresponding IP.
What is DNS resolution?
The process of translating a domain name into an IP address.
What is a DNS record?
A mapping between domain name and an IP address.
How DNS works?
In general the process is as follows:
- The user types an address in the web browser (some_site.com)
- The operating system gets a request from the browser to translate the address the user entered
- A query is created to check whether a local entry for the address exists on the system. If it doesn't, the request is forwarded to the DNS resolver
- The Resolver is a server, usually configured by your ISP when you connect to the internet, that is responsible for resolving your query by contacting other DNS servers
- The Resolver contacts the root nameserver (also known as .)
- The root nameserver responds with the address of the relevant Top Level Domain DNS server (if your address ends with org, then the org TLD)
- The Resolver then contacts the TLD DNS server, which responds with the authoritative nameserver for the domain; the Resolver queries it and gets back the IP address that matches the address the user typed in the browser
- The Resolver passes this information to the browser
- The user is happy :D
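A minimal sketch of triggering this resolution process from Python's standard library; example.com is only an illustrative domain:

```python
# Delegates to the system resolver, which performs the lookup described above.
import socket

ip = socket.gethostbyname("example.com")
print(ip)
```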
Explain the resolution sequence of: www.site.com
It's resolved in this order:
- .
- .com
- site.com
- www.site.com
What is an A record?
A (Address) Maps a host name to an IP address. When a computer has multiple adapter cards and IP addresses, it should have multiple address records.
What is an AAAA record?
An AAAA Record performs the same function as an A Record, but for an IPv6 Address.
What is a PTR record?
While an A record points a domain name to an IP address, a PTR record does the opposite and resolves the IP address to a domain name.
What is an MX record?
MX (Mail Exchange) Specifies a mail exchange server for the domain, which allows mail to be delivered to the correct mail servers in the domain.
Is DNS using TCP or UDP?
DNS uses UDP port 53 for resolving queries, whether regular or reverse. DNS uses TCP for zone transfers (and for responses that are too large for UDP).
True or False? DNS can be used for load balancing
True.
Which techniques can DNS use for load balancing?
What is DNS Record TTL? Why do we need it?
What is a zone? What types of zones are there?
Distributed
Explain Distributed Computing (or Distributed System)
According to Martin Kleppmann:
"Many processes running on many machines...only message-passing via an unreliable network with variable delays, and the system may suffer from partial failures, unreliable clocks, and process pauses."
Another definition: "Systems that are physically separated, but logically connected"
What can cause a system to fail?
- Network
- CPU
- Memory
- Disk
Do you know what is "CAP theorem"? (aka as Brewer's theorem)
According to the CAP theorem, it's not possible for a distributed data store to provide more than two of the following at the same time:
- Availability: Every request receives a response (it doesn't have to be the most recent data)
- Consistency: Every request receives a response with the latest/most recent data
- Partition tolerance: The system keeps running even if messages between nodes are dropped or delayed (a network partition)
What are the problems with the following design? How to improve it?
1. The transition can take time. In other words, noticeable downtime. 2. The standby server is a waste of resources - if the first application server is running, then the standby does nothing
What are the problems with the following design? How to improve it?
Issues: If the load balancer dies, we lose the ability to communicate with the application.
Ways to improve:
- Add another load balancer
- Use DNS A record for both load balancers
- Use message queue
What is "Shared-Nothing" architecture?
It's an architecture in which data is stored in and retrieved from a single, non-shared source, usually exclusively connected to one node, as opposed to architectures where the request can reach one of many nodes and the data is retrieved from one shared location (storage, memory, ...).
Explain the Sidecar Pattern (Or sidecar proxy)
Misc
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Highly Available "Hello World" | Exercise | Solution |
What happens when you type in a URL in an address bar in a browser?
- The browser searches for the record of the domain name IP address in the DNS in the following order:
- Browser cache
- Operating system cache
- The DNS server configured on the user's system (can be ISP DNS, public DNS, ...)
- If it couldn't find a DNS record locally, a full DNS resolution is started.
- It connects to the server using the TCP protocol
- The browser sends an HTTP request to the server
- The server sends an HTTP response back to the browser
- The browser renders the response (e.g. HTML)
- The browser then sends subsequent requests as needed to the server to get the embedded links, javascript, images in the HTML and then steps 3 to 5 are repeated.
TODO: add more details!
API
Explain what is an API
I like this definition from blog.christianposta.com:
"An explicitly and purposefully defined interface designed to be invoked over a network that enables software developers to get programmatic access to data and functionality within an organization in a controlled and comfortable way."
What is an API specification?
From swagger.io:
"An API specification provides a broad understanding of how an API behaves and how the API links with other APIs. It explains how the API functions and the results to expect when using the API"
True or False? API Definition is the same as API Specification
False. From swagger.io:
"An API definition is similar to an API specification in that it provides an understanding of how an API is organized and how the API functions. But the API definition is aimed at machine consumption instead of human consumption of APIs."
What is a Payload in API?
What is Automation? How is it related to, or different from, Orchestration?
Automation is the act of automating tasks to reduce human intervention or interaction in regards to IT technology and systems.
While automation focuses on the task level, Orchestration is the process of automating processes and/or workflows which consist of multiple tasks, usually spanning multiple systems.
Tell me about interesting bugs you've found and also fixed
What is a Debugger and how does it work?
What services an application might have?
- Authorization
- Logging
- Authentication
- Ordering
- Front-end
- Back-end ...
What is Metadata?
Data about data. Basically, it describes the type of information that the underlying data holds.
You can use one of the following formats: JSON, YAML, XML. Which one would you use? Why?
I can't answer this for you :)
What's KPI?
What's OKR?
What's the difference between KPI and OKR?
YAML
What is YAML?
Data serialization language used by many technologies today like Kubernetes, Ansible, etc.
True or False? Any valid JSON file is also a valid YAML file
True. Because YAML is a superset of JSON.
What is the format of the following data?
{
applications: [
{
name: "my_app",
language: "python",
version: 20.17
}
]
}
JSON
What is the format of the following data?
applications:
- app: "my_app"
language: "python"
version: 20.17
YAML
How to write a multi-line string with YAML? What use cases is it good for?
someMultiLineString: |
look mama
I can write a multi-line string
I love YAML
It's good for use cases like writing a shell script where each line of the script is a different command.
What is the difference between someMultiLineString: | and someMultiLineString: > ?
Using > will fold the multi-line string into a single line:
someMultiLineString: >
This is actually
a single line
do not let appearances fool you
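A small check of the two block styles, assuming the third-party PyYAML library is installed:

```python
# '|' keeps the newlines, '>' folds them into spaces.
import yaml

literal = yaml.safe_load("s: |\n  line one\n  line two\n")
folded = yaml.safe_load("s: >\n  line one\n  line two\n")
print(repr(literal["s"]))  # 'line one\nline two\n'
print(repr(folded["s"]))   # 'line one line two\n'
```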
What are placeholders in YAML?
They allow you to reference values instead of writing them directly, and they are used like this:
username: {{ my.user_name }}
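A minimal sketch, assuming the Jinja2 templating library (used, for example, by Ansible), rendering the placeholder above with a made-up value:

```python
# The placeholder is replaced with a concrete value at render time.
from jinja2 import Template

rendered = Template("username: {{ my.user_name }}").render(my={"user_name": "ada"})
print(rendered)  # username: ada
```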
How can you define multiple YAML components in one file?
Using this: ---
For example:
document_number: 1
---
document_number: 2
Firmware
Explain what is a firmware
Wikipedia: "In computing, firmware is a specific class of computer software that provides the low-level control for a device's specific hardware. Firmware, such as the BIOS of a personal computer, may contain basic functions of a device, and may provide hardware abstraction services to higher-level software such as operating systems."
Customers and Service Providers
What is SLO (service-level objective)?
What is SLA (service-level agreement)?
Jira
Explain/Demonstrate the following types in Jira:
- Epic
- Story
- Task
What is a project in Jira?
Kafka
What is Kafka?
In Kafka, how to automatically balance brokers leadership of partitions in a cluster?
- Enable auto leader election and reduce the imbalance percentage ratio
- Manually rebalance by using kafkat
- Configure group.initial.rebalance.delay.ms to 3000
- All of the above
Cassandra
When running a cassandra cluster, how often do you need to run nodetool repair in order to keep the cluster consistent?
- Within the columnFamily GC-grace Once a week
- Less than the compacted partition minimum bytes
- Dependent on the compaction strategy
HTTP
What is HTTP?
Describe HTTP request lifecycle
- Resolve host by request to DNS resolver
- Client SYN
- Server SYN+ACK
- Client ACK
- HTTP request
- HTTP response
True or False? HTTP is stateful
False. It doesn't maintain state for incoming requests.
What does an HTTP request look like?
It consists of:
- Request line - request type
- Headers - content info like length, encoding, etc.
- Body (not always included)
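A minimal sketch of sending such a request with Python's standard library and inspecting the response status line and headers; example.com is only an illustrative host:

```python
# Issue a GET request and print the status line and the response headers.
import http.client

conn = http.client.HTTPConnection("example.com")
conn.request("GET", "/", headers={"Accept": "text/html"})
resp = conn.getresponse()
print(resp.status, resp.reason)  # e.g. 200 OK
print(resp.getheaders())         # headers: content length, encoding, etc.
conn.close()
```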
What HTTP method types are there?
- GET
- POST
- HEAD
- PUT
- DELETE
- CONNECT
- OPTIONS
- TRACE
What HTTP response codes are there?
- 1xx - informational
- 2xx - Success
- 3xx - Redirect
- 4xx - Error, client fault
- 5xx - Error, server fault
What is HTTPS?
Explain HTTP Cookies
HTTP is stateless. To share state, we can use Cookies.
TODO: explain what is actually a Cookie
What is HTTP Pipelining?
You get "504 Gateway Timeout" error from an HTTP server. What does it mean?
The server, while acting as a gateway or proxy, didn't receive a timely response from an upstream server it needed in order to complete the request.
What is a proxy?
What is a reverse proxy?
What is CDN?
When you publish a project, you usually publish it with a license. What types of licenses are you familiar with and which one do you prefer to use?
Load Balancers
What is a load balancer?
A load balancer accepts (or denies) incoming network traffic from a client, and based on some criteria (application related, network, etc.) it distributes those communications out to servers (at least one).
What benefits load balancers provide?
- Scalability - using a load balancer, you can possibly add more servers in the backend to handle more requests/traffic from the clients, as opposed to using one server.
- Redundancy - if one server in the backend dies, the load balancer will keep forwarding the traffic/requests to the second server so users won't even notice one of the servers in the backend is down.
What load balancer techniques/algorithms are you familiar with?
- Round Robin
- Weighted Round Robin
- Least Connection
- Weighted Least Connection
- Resource Based
- Fixed Weighting
- Weighted Response Time
- Source IP Hash
- URL Hash
What are the drawbacks of round robin algorithm in load balancing?
- A simple round robin algorithm knows nothing about the load and the spec of each server it forwards the requests to. It is possible that multiple heavy-workload requests will get to the same server while other servers get only lightweight requests, which will result in one server doing most of the work, maybe even crashing at some point because it is unable to handle all the heavy-workload requests on its own.
- Each request from the client creates a whole new session. This might be a problem for certain scenarios where you would like to perform multiple operations and the server has to know the result of a previous operation, basically being aware of the history it has with the client. In round robin, the first request might hit server X, while the second request might hit server Y and ask to continue processing the data that was already processed on server X.
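A minimal sketch of the round robin idea in Python (the backend addresses are made up): requests are handed to backends in a fixed rotation, with no awareness of each server's load.

```python
# Hand out backends in a fixed rotation, regardless of how busy each one is.
import itertools

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical backend addresses
rotation = itertools.cycle(backends)

def pick_backend():
    return next(rotation)

for request_id in range(6):
    print(request_id, "->", pick_backend())
```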
What is an Application Load Balancer?
In which scenarios would you use ALB?
At what layers a load balancer can operate?
L4 and L7
Can you perform load balancing without using a dedicated load balancer instance?
Yes, you can use DNS for performing load balancing.
What is DNS load balancing? What its advantages? When would you use it?
What are sticky sessions? What are their pros and cons?
Recommended read:
Cons:
- Can cause uneven load on instances (since requests are routed to the same instances)

Pros:
- Ensures in-process sessions are not lost when a new request is created
Explain each of the following load balancing techniques
- Round Robin
- Weighted Round Robin
- Least Connection
- Weighted Least Connection
- Resource Based
- Fixed Weighting
- Weighted Response Time
- Source IP Hash
- URL Hash
Explain the use case for connection draining
To ensure that a Classic Load Balancer stops sending requests to instances that are de-registering or unhealthy, while keeping the existing connections open, use connection draining. This enables the load balancer to complete in-flight requests made to instances that are de-registering or unhealthy.
The maximum timeout value can be set between 1 and 3,600 seconds on both GCP and AWS.
Licenses
Are you familiar with "Creative Commons"? What do you know about it?
Explain the differences between copyleft and permissive licenses
In copyleft, any derivative work must use the same licensing, while in permissive licensing there is no such condition. GPL-3 is an example of a copyleft license while BSD is an example of a permissive license.
Random
How a search engine works?
How auto completion works?
What is a memory leak?
What is your favorite protocol?
SSH, HTTP, DHCP, DNS, ...
What is Cache API?
Storage
What types of storage formats are there?
- File
- Block
- Object
What types of storage devices are there?
What is a filesystem?
Explain Dark Data
HR
These are not DevOps related questions as you probably noticed, but since they are part of the DevOps interview process I've decided it might be good to keep them
Tell us little bit about yourself
Tell me about your last big project/task you worked on
What was most challenging part in the project you worked on?
How did you hear about us?
Tell them how you heard about them :D Relax, there is no wrong or right answer here...I think.
How would you describe good leadership?
Describe yourself in one word
Tell me about a time where you didn't agree on an implementation
How do you deal with a situation where key stakeholders are not around and a big decision needs to be made?
Where do you see yourself 5 years down the line?
Give an example of a time when you were able to change the view of a team about a particular tool/project/technology
Have you ever caused a service outage? (or broke a working project, tool, ...?)
If you worked in this area for more than 5 years it's hard to imagine the answer would be no. It also doesn't have to be a big service outage. Maybe you merged some code that broke a project or its tests. Simply focus on what you learned from such an experience.
Rank the following in order 1 to 5, where 1 is most important: salary, benefits, career, team/people, work life balance
You know your order best; just think carefully about whether you really want to put salary at the top or the bottom....
You have three important tasks scheduled for today. One is for your boss, second for a colleague who is also a friend, third is for a customer. All tasks are equally important. What do you do first?
You have a colleague you don‘t get along with. Tell us some strategies how you create a good work relationship with them anyway.
Bad answer: I don't. Better answer: Every person has strengths and weaknesses. This is also true for colleagues I don't have a good work relationship with, and it is what helps me to create a good work relationship with them. If I am able to highlight or recognize their strengths, I'm able to focus mainly on that when communicating with them.
What do you love about your work?
You know best, but here are some ideas if you find it hard to express yourself:
- Diversity
- Complexity
- Challenging
- Communication with several different teams
What are your responsibilities in your current position?
You know best :)
Why should we hire you for the role?
You can use and elaborate on one or all of the following:
- Passion
- Motivation
- Autodidact
- Creativity (be able to support it with some actual examples)
Pointless Questions
Why do you want to work here?
Why are you looking to leave your current place?
What are your strengths and weaknesses?
Where do you see yourself in five years?
Team Lead
How would you improve productivity in your team?
Questions you CAN ask
A list of questions you as a candidate can ask the interviewer during or after the interview. These are only a suggestion; use them carefully. Not every interviewer will be able to answer these (or be happy to), which should perhaps be a red flag for you regarding working in such a place, but that's really up to you.
What do you like about working here?
How does the company promote personal growth?
What is the current level of technical debt you are dealing with?
Be careful when asking this question - all companies, regardless of size, have some level of tech debt.
Phrase the question in light of the fact that all companies have to deal with this, but you want to see the current pain points they are dealing with.
This is a great way to figure how managers deal with unplanned work, and how good they are at setting expectations with projects.
Why I should NOT join you? (or 'what you don't like about working here?')
What was your favorite project you've worked on?
This can give you insights in some of the cool projects a company is working on, and if you would enjoy working on projects like these. This is also a good way to see if the managers are allowing employees to learn and grow with projects outside of the normal work you'd do.
If you could change one thing about your day to day, what would it be?
Similar to the tech debt question, this helps you identify any pain points with the company.
Additionally, it can be a great way to show how you'd be an asset to the team.
For example, if they mention they have problem X, and you've solved that in the past, you can show how you'd be able to mitigate that problem.
Let's say that we agree and you hire me to this position, after X months, what do you expect that I have achieved?
Not only will this tell you what is expected of you, it will also provide a big hint about the type of work you are going to do in your first months on the job.
Testing
Explain white-box testing
Explain black-box testing
What are unit tests?
What types of tests would you run to test a web application?
Explain test harness?
What is A/B testing?
What is network simulation and how do you perform it?
What types of performance tests are you familiar with?
Explain the following types of tests:
- Load Testing
- Stress Testing
- Capacity Testing
- Volume Testing
- Endurance Testing
Databases
Name | Topic | Objective & Instructions | Solution | Comments |
---|---|---|---|---|
Message Board Tables | Relational DB Tables | Exercise | Solution |
What is a relational database?
- Data Storage: system to store data in tables
- SQL: programming language to manage relational databases
- Data Definition Language: a standard syntax to create, alter and delete tables
What does it mean when a database is ACID compliant?
ACID stands for Atomicity, Consistency, Isolation, Durability. In order to be ACID compliant, the database must meet each of the four criteria
Atomicity - When a change occurs to the database, it should either succeed or fail as a whole.
For example, if you were to update a table, the update should completely execute. If it only partially executes, the update is considered failed as a whole, and will not go through - the DB will revert back to its original state before the update occurred. It should also be mentioned that Atomicity ensures that each transaction is completed as its own standalone "unit" - if any part fails, the whole statement fails.
Consistency - any change made to the database should bring it from one valid state into the next.
For example, if you make a change to the DB, it shouldn't corrupt it. Consistency is upheld by checks and constraints that are pre-defined in the DB. For example, if you tried to change a value from a string to an int when the column should be of datatype string, a consistent DB would not allow this transaction to go through, and the action would not be executed
Isolation - this ensures that a database will never be seen "mid-update" - as multiple transactions are running at the same time, it should still leave the DB in the same state as if the transactions were being run sequentially.
For example, let's say that 20 other people were making changes to the database at the same time. At the time you executed your query, 15 of the 20 changes had gone through, but 5 were still in progress. You should only see the 15 changes that had completed - you wouldn't see the database mid-update as the change goes through.
Durability - Once a change is committed, it will remain committed regardless of what happens (power failure, system crash, etc.). This means that all completed transactions must be recorded in non-volatile memory.
Note that SQL databases are by nature ACID compliant. Certain NoSQL DBs can be ACID compliant depending on how they operate, but as a general rule of thumb, NoSQL DBs are not considered ACID compliant
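As a small illustration of atomicity, using Python's built-in sqlite3 module (the table and values are made up): either both statements commit, or neither does.

```python
# Transfer money between two accounts inside a single transaction.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # the connection context manager commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except sqlite3.Error:
    pass  # on failure the transfer is rolled back as a whole

print(conn.execute("SELECT * FROM accounts").fetchall())
```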
What is sharding?
Sharding is horizontal partitioning: the data set is split across multiple database instances (shards), each holding a subset of the rows.
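A rough sketch of hash-based sharding in Python (the shard names are hypothetical): each key is mapped to one of several shards, so no single instance holds all the data.

```python
# Pick a shard deterministically from the key's hash.
import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]

def shard_for(key: str) -> str:
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("user:1001"))
print(shard_for("user:1002"))
```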
Are you able to explain what it is good for?
You find out your database became a bottleneck and users experience issues accessing data. How can you deal with such situation?
Not much information is provided as to why it became a bottleneck and what the current architecture is, so one general approach could be
to reduce the load on your database by moving frequently-accessed data to an in-memory structure (a cache).
What is a connection pool?
A connection pool is a cache of database connections, and the reason it's used is to avoid the overhead of establishing a connection for every query made to the database.
What is a connection leak?
A connection leak is a situation where a database connection isn't closed after being created and is no longer needed.
What is Table Lock?
Your database performs more slowly than usual. More specifically, your queries are taking a lot of time. What would you do?
- Query for running queries and cancel the irrelevant queries
- Check for connection leaks (query for running connections and include their IP)
- Check for table locks and kill irrelevant locking sessions
What is a Data Warehouse?
"A data warehouse is a subject-oriented, integrated, time-variant and non-volatile collection of data in support of organisation's decision-making process"
Explain what is a time-series database
What is OLTP (Online transaction processing)?
What is OLAP (Online Analytical Processing)?
What is an index in a database?
A database index is a data structure that improves the speed of operations in a table. Indexes can be created using one or more columns, providing the basis for both rapid random lookups and efficient ordering of access to records.
What data types are there in relational databases?
Explain Normalization
Data that is used multiple times in a database should be stored once and referenced with a foreign key.
This has the clear benefit of ease of maintenance where you need to change a value only in a single place to change it everywhere.
Explain Primary Key and Foreign Key
Primary Key: each row in every table should have a unique identifier that represents the row.
Foreign Key: a reference to another table's primary key. This allows you to join tables together to retrieve all the information you need without duplicating data.
What types of data tables have you used?
- Primary data table: main data you care about
- Details table: includes a foreign key and has a one-to-many relationship
- Lookup values table: can be one table per lookup or a table containing all the lookups; has a one-to-many relationship
- Multi reference table
What is ORM? What benefits does it provide with regard to relational database usage?
Wikipedia: "is a programming technique for converting data between incompatible type systems using object-oriented programming languages"
In regards to the relational databases:
- Database as code
- Database abstraction
- Encapsulates SQL complexity
- Enables code review process
- Enables usage as a native OOP structure
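A minimal sketch, assuming the SQLAlchemy library: a table is described as a Python class and rows are manipulated as objects, hiding the raw SQL.

```python
# Define a mapped class, create the schema, insert a row and query it back.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
session = Session()
session.add(User(name="alice"))
session.commit()

print(session.query(User).filter_by(name="alice").first().id)
```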
What is DDL?
Wikipedia: "In the context of SQL, data definition or data description language (DDL) is a syntax for creating and modifying database objects such as tables, indices, and users."
Regex
Given a text file, perform the following exercises
Extract
Extract all the numbers
Extract the first word of each line
Bonus: extract the last word of each line
Extract all the IP addresses
Extract dates in the format of yyyy-mm-dd or yyyy-dd-mm
Extract email addresses
Replace
Replace tabs with four spaces
Replace 'red' with 'green'
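A small illustration of the kinds of patterns these exercises call for, using Python's re module on a made-up line of text:

```python
# Extraction and replacement patterns similar to the exercises above.
import re

text = "2023-04-01 10.0.0.12 user@example.com ERROR 42"

print(re.findall(r"\d+", text))                          # all the numbers
print(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", text))  # IP addresses
print(re.findall(r"\d{4}-\d{2}-\d{2}", text))            # dates like yyyy-mm-dd
print(re.sub(r"\t", "    ", "a\tb"))                     # replace tabs with four spaces
```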
System Design
Explain what is a "Single point of failure"?
Explain "3-Tier Architecture" (including pros and cons)
What are the drawbacks of monolithic architecture?
- Not suitable for frequent code changes and the ability to deploy new features
- Not designed for today's infrastructure (like public clouds)
- Scaling a team to work on a monolithic architecture is more challenging
What are the advantages of microservices architecture over a monolithic architecture?
- Each of the services can fail individually without escalating into an application-wide outage.
- Each service can be developed and maintained by a separate team and this team can choose its own tools and coding language
Explain "Loose Coupling"
What is a message queue? When is it used?
Scalability
Explain Scalability
The ability to easily grow in size and capacity based on demand and usage.
Explain Elasticity
The ability to grow, but also to shrink, based on what is required
Explain Disaster Recovery
Explain Fault Tolerance and High Availability
Fault Tolerance - The ability to self-heal and return to normal capacity. Also the ability to withstand a failure and remain functional.
High Availability - Being able to access a resource (in some use cases, using different platforms)
What is the difference between high availability and Disaster Recovery?
wintellect.com: "High availability, simply put, is eliminating single points of failure and disaster recovery is the process of getting a system back to an operational state when a system is rendered inoperative. In essence, disaster recovery picks up when high availability fails, so HA first."
Explain Vertical Scaling
Vertical Scaling is the process of adding resources to increase power of existing servers. For example, adding more CPUs, adding more RAM, etc.
What are the disadvantages of Vertical Scaling?
With vertical scaling alone, the component still remains a single point of failure. In addition, there is a hardware limit: if you can't add more resources to the machine, you are not able to scale vertically any further.
Explain Horizontal Scaling
Horizontal Scaling is the process of adding more instances/resources that will be able to handle requests as one unit
What is the disadvantage of Horizontal Scaling? What is often required in order to perform Horizontal Scaling?
A load balancer. You can add more resources, but if you would like them to be part of the process, you have to serve them the requests/responses. Also, data inconsistency is a concern with horizontal scaling.
Explain in which use cases will you use vertical scaling and in which use cases you will use horizontal scaling
Explain Resiliency and what ways are there to make a system more resilient
Explain "Consistent Hashing"
How would you update each of the services in the following drawing without having app (foo.com) downtime?
What is the problem with the following architecture and how would you fix it?
The load on the producers or consumers may be high which will then cause them to hang or crash.
Instead of working in "push mode", the consumers can pull tasks only when they are ready to handle them. It can be fixed by using a streaming platform like Kafka, Kinesis, etc. This platform will make sure to handle the high load/traffic and pass tasks/messages to consumers only when the ready to get them.
Users report that there is a huge spike in processing time when adding a little bit more data to process as an input. What might be the problem?
How would you scale the architecture from the previous question to hundreds of users?
Cache
What is "cache"? In which cases would you use it?
What is "distributed cache"?
Why not writing everything to cache instead of a database/datastore?
Migrations
How do you prepare for a migration? (or plan a migration)
You can mention:
- roll-back & roll-forward
- cut over
- dress rehearsals
- DNS redirection
Explain "Branch by Abstraction" technique
Design a system
Can you design a video streaming website?
Can you design a photo upload website?
How would you build a URL shortener?
More System Design Questions
Additional exercises can be found in system-design-notebook repository.
Hardware
What is a CPU?
A central processing unit (CPU) performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program. This contrasts with external components such as main memory and I/O circuitry, and specialized processors such as graphics processing units (GPUs).
What is RAM?
RAM (Random Access Memory) is the hardware in a computing device where the operating system (OS), application programs and data in current use are kept so they can be quickly reached by the device's processor. RAM is the main memory in a computer. It is much faster to read from and write to than other kinds of storage, such as a hard disk drive (HDD), solid-state drive (SSD) or optical drive.
What is an embedded system?
An embedded system is a computer system - a combination of a computer processor, computer memory, and input/output peripheral devices—that has a dedicated function within a larger mechanical or electronic system. It is embedded as part of a complete device often including electrical or electronic hardware and mechanical parts.
Can you give an example of an embedded system?
Raspberry Pi
What types of storage are there?
Big Data
Explain what exactly Big Data is
As defined by Doug Laney:
- Volume: Extremely large volumes of data
- Velocity: Real time, batch, streams of data
- Variety: Various forms of data, structured, semi-structured and unstructured
- Veracity or Variability: Inconsistent, sometimes inaccurate, varying data
What is DataOps? How is it related to DevOps?
DataOps seeks to reduce the end-to-end cycle time of data analytics, from the origin of ideas to the literal creation of charts, graphs and models that create value. DataOps combines Agile development, DevOps and statistical process controls and applies them to data analytics.
What is Data Architecture?
An answer from talend.com:
"Data architecture is the process of standardizing how organizations collect, store, transform, distribute, and use data. The goal is to deliver relevant data to people who need it, when they need it, and help them make sense of it."
Explain the different formats of data
- Structured - data that has defined format and length (e.g. numbers, words)
- Semi-structured - Doesn't conform to a specific format but is self-describing (e.g. XML, SWIFT)
- Unstructured - does not follow a specific format (e.g. images, text messages)
What is a Data Warehouse?
Wikipedia's explanation on Data Warehouse Amazon's explanation on Data Warehouse
Can you explain the difference between a data lake and a data warehouse?
What is "Data Versioning"? What models of "Data Versioning" are there?
What is ETL?
Apache Hadoop
Explain Hadoop YARN
Responsible for managing the compute resources in clusters and scheduling users' applications
Explain Hadoop MapReduce
A programming model for large-scale data processing
Explain Hadoop Distributed File Systems (HDFS)
- Distributed file system providing high aggregate bandwidth across the cluster.
- For a user it looks like a regular file system structure but behind the scenes it's distributed across multiple machines in a cluster
- Typical file sizes reach terabytes, and it can scale to support millions of files
- It's fault tolerant which means it provides automatic recovery from faults
- It's best suited for running long batch operations rather than live analysis
What do you know about HDFS architecture?
- Master-slave architecture
- Namenode - master, Datanodes - slaves
- Files split into blocks
- Blocks stored on datanodes
- Namenode controls all metadata
Ceph
Explain what is Ceph
True or False? Ceph favors consistency and correctness over performance
True
Which services or types of storage does Ceph support?
- Object (RGW)
- Block (RBD)
- File (CephFS)
What is RADOS?
- Reliable Autonomic Distributed Object Storage
- Provides low-level data object storage service
- Strong Consistency
- Simplifies design and implementation of higher layers (block, file, object)
Describe RADOS software components
- Monitor
- Central authority for authentication, data placement, policy
- Coordination point for all other cluster components
- Protect critical cluster state with Paxos
- Manager
- Aggregates real-time metrics (throughput, disk usage, etc.)
- Host for pluggable management functions
- 1 active, 1+ standby per cluster
- OSD (Object Storage Daemon)
- Stores data on an HDD or SSD
- Services client IO requests
What is the workflow of retrieving data from Ceph?
What are "Placement Groups"?
Describe in the detail the following: Objects -> Pool -> Placement Groups -> OSDs
What is OMAP?
What is a metadata server? How does it work?
Packer
What is Packer? What is it used for?
In general, Packer automates machine image creation. It allows you to focus on configuration prior to deployment while making the images. This allows you to start the instances much faster in most cases.
Packer follows a "configuration->deployment" model or "deployment->configuration"?
A configuration->deployment model, which has some advantages like:
- Deployment Speed - you configure once prior to deployment instead of configuring every time you deploy. This allows you to start instances/services much quicker.
- More immutable infrastructure - with configuration->deployment it's not likely to have very different deployments since most of the configuration is done prior to the deployment. Issues like dependencies errors are handled/discovered prior to deployment in this model.
Certificates
If you are looking for a way to prepare for a certain exam, this is the section for you. Here you'll find a list of certificates, each referencing a separate file with focused questions that will help you prepare for the exam. Good luck :)
AWS
- Cloud Practitioner (Latest update: 2020)
- Solutions Architect Associate (Latest update: 2021)
Azure
- AZ-900 (Latest update: 2021)
Kubernetes
- Certified Kubernetes Administrator (CKA) (Latest update: 2020)
Other DevOps Projects
Credits
Thanks to all of our amazing contributors who make it easy for everyone to learn new things :)
Logos credits can be found here