Concepts
Master essential DevOps concepts with practical examples and detailed explanations
Linux
13 concepts • 20 questions
What is Linux?
Linux is a free, open-source operating system kernel created by Linus Torvalds in 1991. Born from a university project, it has evolved into the foundation of modern computing infrastructure. Major distributions include Ubuntu (user-friendly), Red Hat Enterprise Linux (enterprise), CentOS (community enterprise), Debian (stable), and Alpine (lightweight). Linux powers 96.3% of the world's top one million web servers, every system on the TOP500 supercomputer list, and billions of Android devices. It is essential for DevOps because it underpins containerization (Docker/Kubernetes), cloud computing (AWS/Azure/GCP), and automation scripting, and it provides the stable, secure foundation for CI/CD pipelines. Think of Linux as a universal translator for computers: it lets software programs talk to the hardware, just as a translator helps people who speak different languages communicate.
Learning Sections
Choose your learning path
Basic Commands
Essential navigation, inspection and search tools: ls, cd, pwd, cat, less, grep, find, locate, which, man.
Code Examples
$ ls -la
$ cd /home && pwd
$ grep 'error' app.log
$ find . -name '*.txt'
File Permissions & Ownership
r/w/x for user, group, others; chmod, chown, umask; symbolic vs numeric (e.g. 754).
Code Examples
$ chmod 755 myscript.sh
$ chmod 644 config.txt
$ chown user:group myfile.txt
$ ls -l myfile.txt
Filesystem Structure
Key directories: /etc configs, /var variable data, /usr user apps, /opt optional, /tmp temp, /proc kernel view.
Code Examples
$ ls /var/log
$ cd /etc && ls
$ ls /home
$ df -h
Process Management
Observe & control processes: ps, top/htop, kill, nice/renice, jobs & foreground/background.
Code Examples
$ ps aux
$ top
$ kill 1234
$ jobs
Shell Scripting Basics
Bash scripting fundamentals: variables, loops, conditionals, functions, and automation examples.
Code Examples
$ #!/bin/bash
echo "Deployment started"
$ APP_NAME="myapp"
echo "Deploying $APP_NAME"
$ if [ -f app.log ]; then
echo "Log file found"
fi
$ for server in web1 web2 web3; do
echo "Checking $server"
done
System Services & Init Systems
Managing services with systemd: systemctl commands, unit files, service states, and startup configuration.
Code Examples
$ systemctl status nginx
$ systemctl restart docker
$ systemctl enable nginx
$ journalctl -u nginx
Logging & Monitoring
System logging with syslog, journalctl, log rotation, and monitoring tools for system health.
Code Examples
$ tail -f /var/log/nginx/error.log
$ grep "ERROR" /var/log/app.log
$ journalctl -u docker
$ tail -100 /var/log/syslog
Cron Jobs & Scheduling
Task automation with cron: crontab syntax, scheduling patterns, and systemd timers as modern alternative.
Code Examples
$ crontab -l
$ crontab -e
$ 0 2 * * * /backup/backup.sh
$ */5 * * * * /health/check.sh
Security Fundamentals
Linux security basics: user management, sudo configuration, firewall rules, and system hardening practices.
Code Examples
$ sudo adduser newuser
$ sudo passwd username
$ sudo ufw status
$ who
Virtualization & Containers
Understanding VMs vs containers, Linux namespaces, cgroups, and container runtime fundamentals.
Code Examples
$ docker ps
$ docker images
$ docker logs myapp
$ docker exec -it myapp bash
Performance Tuning
System performance analysis and optimization: monitoring tools, kernel parameters, and bottleneck identification.
Code Examples
$ htop
$ free -h
$ uptime
$ df -h
Networking & Connectivity
Network configuration, connectivity testing, and troubleshooting tools for DevOps networking tasks.
Code Examples
$ ping -c 4 8.8.8.8
$ curl -I https://example.com
$ ss -tulpen
$ dig A example.com +short
Storage & Filesystems
Disk management, filesystem operations, and storage troubleshooting for system administration.
Code Examples
$ lsblk -f
$ df -h
$ sudo mount /dev/xvdf1 /data
$ du -sh * | sort -h
Docker
13 concepts • 20 questions
What is Docker?
Docker is a containerization platform that packages applications and their dependencies into lightweight, portable containers. It revolutionizes software deployment by ensuring consistency across different environments - from development to production. Docker eliminates 'it works on my machine' problems by creating isolated, reproducible environments. Core concepts include images, containers, Dockerfiles, volumes, networks, and registries for efficient application lifecycle management.
Learning Sections
Choose your learning path
Docker Architecture
Daemon (dockerd), REST API, CLI client, and registries (Docker Hub / private).
Code Examples
$ docker info
$ docker version
$ docker system df
$ docker system prune -a
Images vs Containers
Images = immutable layered filesystem; containers = runtime instance with writable layer & process namespace.
Code Examples
$ docker images
$ docker ps
$ docker ps -a
$ docker run nginx
Dockerfile Instructions
Core directives: FROM, COPY vs ADD, RUN, CMD, ENTRYPOINT, ENV, EXPOSE, WORKDIR, USER, HEALTHCHECK.
Code Examples
$ FROM node:18-alpine
$ WORKDIR /app
$ COPY package.json .
$ RUN npm install
Volumes & Bind Mounts
Volumes managed by Docker; bind mounts map host paths directly. Persistence & portability differ.
Code Examples
$ docker volume create mydata
$ docker run -v mydata:/app/data nginx
$ docker run -v $(pwd):/app nginx
$ docker volume ls
Docker Networking
Default bridge, host, none, user-defined bridge, overlay (Swarm), macvlan for L2 integration.
Code Examples
$ docker network ls
$ docker network create mynetwork
$ docker run --network mynetwork nginx
$ docker run -p 8080:80 nginx
Multi-stage Builds
Use multiple FROM stages to build artifacts, then copy only what you need into the final stage, which shrinks the final image.
Code Examples
$ FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
Image Tagging & Versioning
Tags are mutable pointers; digests (sha256) are immutable. Pin for reproducibility.
Code Examples
$ docker pull nginx:1.27-alpine
$ docker inspect --format='{{.RepoDigests}}' nginx:1.27-alpine
Container Logs
'docker logs' reads stdout/stderr from default logging driver (json-file, fluentd, etc.).
Code Examples
$ docker logs -f --tail=50 api
$ docker run --log-driver json-file --log-opt max-size=10m myimg
Container Lifecycle Management
Creating, starting, stopping, restarting, and removing containers with proper resource management.
Code Examples
$ docker run -d --name web --restart=unless-stopped --memory=512m --cpus=0.5 nginx
$ docker exec -it web /bin/bash
$ docker stop --time=30 web
$ docker stats web
$ docker update --memory=1g --cpus=1.0 web
Security Best Practices
Container security: non-root users, image scanning, secrets management, and runtime protection.
Code Examples
$ docker run --user 1001:1001 --read-only --tmpfs /tmp nginx
$ docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx
$ trivy image nginx:latest
$ docker secret create db_password password.txt
$ docker run --security-opt=no-new-privileges nginx
Performance Optimization
Image size reduction, build optimization, runtime performance tuning, and resource management.
Code Examples
$ docker build --target production --build-arg BUILDKIT_INLINE_CACHE=1 .
$ docker system df -v
$ docker build --cache-from myapp:latest --tag myapp:new .
$ docker run --memory=512m --oom-kill-disable=false myapp
$ docker image prune -a --filter 'until=24h'
Registry Management
Working with Docker registries: pushing, pulling, authentication, and private registry setup.
Code Examples
$ docker tag myapp:latest registry.company.com/myapp:v1.2.3
$ docker push registry.company.com/myapp:v1.2.3
$ docker login -u username registry.company.com
$ docker manifest inspect nginx:latest
$ docker search --limit 5 --filter stars=100 nginx
Docker Compose Basics
Compose orchestrates multi-container apps via declarative YAML (services, networks, volumes).
Code Examples
$ docker compose up -d
$ docker compose ps
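Below is a minimal sketch of a compose file these commands would act on; the service, volume, and network names are illustrative, not from the examples above.
$ # docker-compose.yml (minimal sketch)
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - webdata:/usr/share/nginx/html
    networks:
      - appnet
volumes:
  webdata:
networks:
  appnet: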
Kubernetes
19 concepts • 20 questions
What is Kubernetes?
Kubernetes (K8s) is an orchestration platform that automates the deployment, scaling, and management of containerized applications across clusters of machines. It provides service discovery, load balancing, automated rollouts/rollbacks, and self-healing capabilities. Kubernetes abstracts infrastructure complexity, enabling teams to focus on application logic while ensuring high availability, scalability, and resource efficiency in cloud-native environments.
Learning Sections
Choose your learning path
What is Kubernetes?
Open-source container orchestration platform that automates deployment, scaling, and management of containerized applications.
Code Examples
$ kubectl version --client
$ kubectl cluster-info
Kubernetes Architecture
Control-plane/worker architecture: the control plane manages cluster state while worker nodes run the workloads.
Code Examples
$ kubectl get nodes
$ kubectl get pods
$ kubectl get all
Pods - The Atomic Unit
Smallest deployable unit containing one or more containers sharing network and storage, representing a single instance of an application.
Code Examples
$ kubectl run nginx --image=nginx
$ kubectl get pods
$ kubectl logs nginx
$ kubectl delete pod nginx
Deployments & ReplicaSets
Deployments manage ReplicaSets for declarative updates, scaling, and rollbacks of stateless applications.
Code Examples
$ kubectl create deployment nginx --image=nginx
$ kubectl get deployments
$ kubectl scale deployment nginx --replicas=3
$ kubectl delete deployment nginx
ConfigMaps & Secrets
Externalize configuration and sensitive data from application code for better security and flexibility.
Code Examples
$ kubectl create configmap myconfig --from-literal=ENV=prod
$ kubectl create secret generic mysecret --from-literal=password=secret123
$ kubectl get configmaps
$ kubectl get secrets
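A sketch of how a pod might consume the myconfig ConfigMap and mysecret Secret created above; the pod and container names are illustrative.
$ # pod.yaml (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: myconfig
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password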
Services - Networking & Load Balancing
Stable network endpoints for pods with load balancing across replicas using ClusterIP, NodePort, and LoadBalancer types.
Code Examples
$ kubectl expose deployment nginx --port=80
$ kubectl get services
$ kubectl describe service nginx
$ kubectl delete service nginx
Namespaces & Isolation
Logical partitions for names, RBAC scoping, resource quotas, network policies.
Code Examples
$ kubectl get all -n kube-system
Volumes & Persistent Storage
Data persistence through volumes, PersistentVolumes (PV), and PersistentVolumeClaims (PVC) for stateful applications.
Code Examples
$ kubectl get pv,pvc
$ kubectl describe pvc my-pvc
$ kubectl get storageclass
Probes: Readiness vs Liveness
Readiness gates traffic, liveness restarts unhealthy containers; startupProbe for slow boots.
Code Examples
$ readinessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
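A companion sketch for the same container, assuming the same /health endpoint: a liveness probe to restart unhealthy containers and a startup probe for slow boots.
$ livenessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 20
  failureThreshold: 3
startupProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10
  failureThreshold: 30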
StatefulSets - Stateful Applications
Manages stateful applications requiring stable network identities, persistent storage, and ordered deployment/scaling.
Code Examples
$ kubectl get statefulsets
$ kubectl scale statefulset web --replicas=5
$ kubectl get pods -l app=web
Ingress Basics
HTTP routing layer mapping host/path rules to Services via an ingress controller (nginx, traefik); the Gateway API is emerging as its successor.
Code Examples
$ kubectl apply -f ingress.yaml
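A minimal sketch of what that ingress.yaml might contain; the host, backend service name, and ingress class are illustrative assumptions.
$ # ingress.yaml (sketch)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80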
Jobs & CronJobs - Batch Workloads
Run batch tasks to completion with Jobs, and schedule recurring tasks with CronJobs using cron syntax.
Code Examples
$ kubectl create job backup --image=backup-tool
$ kubectl get jobs,cronjobs
$ kubectl create cronjob backup --image=backup-tool --schedule='0 2 * * *'
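The declarative equivalent of the cronjob command above, as a manifest sketch.
$ # cronjob.yaml (sketch)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: backup-tool
          restartPolicy: OnFailure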
RBAC Basics
Role/ClusterRole define permissions; (Cluster)RoleBinding attaches them to subjects (users, groups, ServiceAccounts).
Code Examples
$ kubectl get clusterrolebinding | grep admin
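A minimal Role plus RoleBinding sketch; the role, binding, and ServiceAccount names are illustrative.
$ # rbac.yaml (sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader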
DaemonSets - Node-Level Services
Ensures a pod runs on all (or selected) nodes, typically for system-level services like monitoring and logging.
Code Examples
$ kubectl get daemonsets -A
$ kubectl describe daemonset fluentd
$ kubectl get pods -o wide -l app=monitoring
kubectl Commands
Common verbs: get, describe, logs, exec, apply, rollout, top; use -o wide / jsonpath for details.
Code Examples
$ kubectl logs deploy/api -f
$ kubectl rollout status deploy/api
Resource Limits & Requests
Control CPU and memory allocation with requests (guaranteed) and limits (maximum) for efficient resource management.
Code Examples
$ kubectl top pods --sort-by=cpu
$ kubectl describe pod <pod-name> | grep -A 5 Limits
$ kubectl get limitrange
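A sketch of the per-container resources block these commands inspect; the values are illustrative.
$ resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi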
Labels & Selectors
Key/value metadata driving selection (Services, Deployments, NetworkPolicies). Labels themselves can be updated, but workload selectors are immutable once set.
Code Examples
$ kubectl get pods -l app=web
Horizontal & Vertical Pod Autoscaling
Automatically scale applications based on CPU, memory, or custom metrics with HPA and VPA.
Code Examples
$ kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=10
$ kubectl get hpa
$ kubectl describe hpa nginx
Network Policies - Micro-segmentation
Control network traffic between pods using firewall-like rules for enhanced security and isolation.
Code Examples
$ kubectl get networkpolicies -A
$ kubectl describe networkpolicy deny-all
$ kubectl exec -it pod1 -- curl pod2:8080
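A sketch of a default deny-all ingress policy like the one described above; it applies to every pod in whichever namespace it is created in.
$ # networkpolicy-deny-all.yaml (sketch)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress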
CI/CD
10 concepts • 20 questions
What is CI/CD?
CI/CD (Continuous Integration/Continuous Deployment) is a methodology that automates the software delivery pipeline from code commit to production deployment. CI focuses on automatically building, testing, and integrating code changes, while CD automates the deployment process. This approach reduces manual errors, accelerates release cycles, improves code quality through automated testing, and enables rapid feedback loops for development teams.
Learning Sections
Choose your learning path
What is CI/CD?
Continuous Integration and Continuous Delivery/Deployment practices that automate software development lifecycle from code commit to production.
Code Examples
$ git push origin main
$ docker build -t app:latest .
CI/CD Core Principles
Fundamental practices: frequent integration, automated testing, deployment automation, and fast feedback loops.
Code Examples
$ git push origin main
$ npm run test:ci
$ docker build -t app:${BUILD_NUMBER} .
Common Tools
Jenkins, GitHub Actions, GitLab CI, CircleCI, Argo Workflows, Tekton.
Code Examples
$ .github/workflows/ci.yml
Pipeline Stages
Typical sequence: checkout -> build -> test -> security scan -> package -> artifact publish -> deploy.
Code Examples
$ stages: [build, test, deploy]
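A .gitlab-ci.yml-style sketch of that sequence; the job names and script commands are illustrative assumptions.
$ # .gitlab-ci.yml (sketch)
stages: [build, test, deploy]
build:
  stage: build
  script:
    - docker build -t app:$CI_COMMIT_SHORT_SHA .
test:
  stage: test
  script:
    - npm run test:ci
deploy:
  stage: deploy
  script:
    - ./deploy.sh
  rules:
    - if: $CI_COMMIT_BRANCH == "main"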
Triggers
Events starting pipelines: git push, PR, tag, manual approval, schedule (cron), API/webhook.
Code Examples
$ on: [push, pull_request]
Artifact Management
Store built outputs immutably (containers, packages) with metadata & provenance.
Code Examples
$ docker build -t registry/app:1.2.3 .
Environment Variables & Secrets
Parameterize pipelines with env vars; secure secrets via vaults, masked logs, least privilege.
Code Examples
$ echo "${API_URL}"
Branching Strategies
GitFlow (feature/release/hotfix branches) vs Trunk-Based (short-lived branches, feature flags).
Code Examples
$ git checkout -b feature/login
Testing in CI/CD
Automated testing pyramid: unit tests, integration tests, end-to-end tests, and quality gates for reliable deployments.
Code Examples
$ jest --coverage --ci
$ cypress run --record
$ sonar-scanner -Dsonar.projectKey=myapp
Deployment Strategies
Blue-green, canary, rolling deployments, and feature flags for safe, controlled releases with minimal downtime.
Code Examples
$ kubectl patch deployment app -p '{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"app:v2"}]}}}}'
$ istioctl dashboard kiali
$ kubectl rollout undo deployment/app
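For rolling deployments specifically, a sketch of the strategy block on a Kubernetes Deployment; the surge/unavailable values are illustrative.
$ # Deployment strategy excerpt (sketch)
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0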
Monitoring
10 concepts • 20 questions
What is Monitoring?
Monitoring in DevOps involves collecting, analyzing, and visualizing system metrics, logs, and application performance data to ensure optimal operation and quick issue resolution. It includes infrastructure monitoring (CPU, memory, disk), application performance monitoring (APM), log aggregation, alerting, and observability practices. Effective monitoring enables proactive issue detection, performance optimization, and data-driven decision making for system reliability.
Learning Sections
Choose your learning path
Introduction to Monitoring
Monitoring is the continuous observation and measurement of system performance, availability, and health to ensure optimal operation and rapid issue detection.
Code Examples
$ curl -s http://localhost:9090/metrics | grep cpu
$ docker stats --no-stream
Types of Monitoring
Infrastructure, application, network, and synthetic monitoring layers each provide different perspectives on system health and performance.
Code Examples
$ sar -u 1 5
$ curl -w '%{time_total}\n' -o /dev/null -s http://api.example.com
The Three Pillars: Metrics, Logs, Traces
The foundational telemetry data types that provide comprehensive system observability when used together.
Code Examples
$ rate(http_requests_total[5m])
$ kubectl logs -f deployment/app --tail=100
$ trace_id=abc123 span_id=def456 duration=180ms
Prometheus - Metrics Collection
Open-source monitoring system with powerful query language, pull-based architecture, and built-in alerting capabilities.
Code Examples
$ rate(http_requests_total{status=~"5.."}[5m]) > 0.1
$ prometheus --config.file=/etc/prometheus/prometheus.yml
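A minimal sketch of the prometheus.yml referenced above; the scrape job name and target are illustrative.
$ # prometheus.yml (sketch)
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']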
Grafana - Data Visualization
Powerful dashboard and visualization platform supporting multiple data sources with rich graphing capabilities.
Code Examples
$ sum(rate(http_requests_total[5m])) by (service)
$ grafana-cli plugins list-remote
ELK Stack - Log Management
Elasticsearch, Logstash, and Kibana stack for centralized log collection, processing, storage, and visualization.
Code Examples
$ curl -X GET 'localhost:9200/_search?q=error&size=10'
$ filebeat setup --dashboards
Alerting Strategies & Best Practices
Effective alerting focuses on user-impacting symptoms with actionable, well-tuned notifications to minimize noise and maximize response effectiveness.
Code Examples
$ rate(http_requests_total{status=~"5.."}[5m]) / rate(http_requests_total[5m]) > 0.05
$ amtool silence add alertname="HighErrorRate" --duration=2h
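A sketch of a Prometheus alerting rule wrapping the error-rate expression above; the group name, labels, and thresholds are illustrative.
$ # alert-rules.yml (sketch)
groups:
- name: availability
  rules:
  - alert: HighErrorRate
    expr: rate(http_requests_total{status=~"5.."}[5m]) / rate(http_requests_total[5m]) > 0.05
    for: 10m
    labels:
      severity: page
    annotations:
      summary: "5xx error rate above 5% for 10 minutes"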
Distributed Tracing
End-to-end request tracking across microservices to understand performance bottlenecks and failure propagation in distributed systems.
Code Examples
$ curl 'http://localhost:16686/api/traces?service=frontend&limit=10'
$ otel-collector --config=otel-config.yaml
SLO/SLI Monitoring
Service Level Objectives and Indicators provide user-focused reliability targets and measurement frameworks for service quality.
Code Examples
$ sum(rate(http_requests_total{status!~"5.."}[5m])) / sum(rate(http_requests_total[5m]))
$ histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))
Dashboard Design & Visualization
Effective dashboard design follows hierarchy principles, focuses on actionable insights, and provides clear visual communication of system health.
Code Examples
$ sum by (service) (rate(http_requests_total[5m]))
$ grafana-cli plugins install grafana-piechart-panel
Terraform
10 concepts • 20 questions
What is Terraform?
Terraform is an Infrastructure as Code (IaC) tool that enables you to define, provision, and manage cloud infrastructure using declarative configuration files. It supports multiple cloud providers and services, allowing you to version control your infrastructure, ensure consistency across environments, and automate resource provisioning. Terraform's state management and planning capabilities provide predictable infrastructure changes and drift detection.
Learning Sections
Choose your learning path
Introduction to Terraform
Terraform is an open-source Infrastructure as Code (IaC) tool that enables you to define, provision, and manage cloud infrastructure using declarative configuration files.
Code Examples
$ terraform --version
$ terraform init
Infrastructure as Code Principles
IaC principles emphasize declarative configuration, version control, reproducibility, and treating infrastructure like application code.
Code Examples
$ resource "aws_s3_bucket" "logs" {
bucket = var.bucket_name
tags = var.common_tags
}
$ terraform fmt -recursive
Terraform Core Workflow
The standard Terraform workflow follows the init → validate → plan → apply → destroy pattern for safe infrastructure management.
Code Examples
$ terraform init
terraform validate
terraform plan -out=plan.tfplan
terraform apply plan.tfplan
$ terraform plan -var='environment=prod' -var-file='prod.tfvars'
Terraform Providers
Providers are plugins that enable Terraform to interact with cloud platforms, SaaS providers, and other APIs through resource and data source implementations.
Code Examples
$ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
$ terraform providers lock -platform=linux_amd64 -platform=darwin_amd64
Terraform Modules
Modules are reusable, self-contained packages of Terraform configurations that encapsulate infrastructure patterns and promote code reuse.
Code Examples
$ module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = var.vpc_name
cidr = var.vpc_cidr
}
$ output "vpc_id" {
value = module.vpc.vpc_id
}
State Management
Terraform state tracks resource mappings and metadata, enabling infrastructure lifecycle management and team collaboration through remote backends.
Code Examples
$ terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "prod/terraform.tfstate"
region = "us-west-2"
dynamodb_table = "terraform-locks"
encrypt = true
}
}
$ terraform state mv aws_instance.web aws_instance.web_server
Variables & Outputs
Variables provide input parameters for flexible configurations, while outputs expose computed values for consumption by other configurations or external systems.
Code Examples
$ variable "environment" {
type = string
description = "Environment name"
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "Environment must be dev, staging, or prod."
}
}
$ output "load_balancer_dns" {
value = aws_lb.main.dns_name
description = "DNS name of the load balancer"
}
Terraform Workspaces
Workspaces enable managing multiple environments or configurations with the same Terraform code using separate state files.
Code Examples
$ resource "aws_instance" "web" {
tags = {
Name = "web-${terraform.workspace}"
Environment = terraform.workspace
}
}
$ terraform workspace new staging && terraform workspace select staging
Data Sources
Data sources allow Terraform to fetch information about existing infrastructure and external systems without managing them.
Code Examples
$ data "aws_ami" "ubuntu" {
most_recent = true
owners = ["099720109477"]
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
}
$ data "aws_availability_zones" "available" {
state = "available"
}
Terraform Registry
The Terraform Registry is a public repository hosting thousands of providers and modules for infrastructure automation and reuse.
Code Examples
$ module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = "my-vpc"
cidr = "10.0.0.0/16"
}
$ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
Jenkins
10 concepts • 20 questions
What is Jenkins?
Jenkins is an open-source automation server that facilitates Continuous Integration and Continuous Deployment (CI/CD) pipelines. It automates the building, testing, and deployment of applications through configurable workflows called pipelines. Jenkins supports an extensive plugin ecosystem, distributed builds, and integration with various tools and platforms. It enables teams to automate repetitive tasks, ensure code quality, and accelerate software delivery processes.
Learning Sections
Choose your learning path
Introduction to Jenkins
Jenkins is an open-source automation server that enables Continuous Integration and Continuous Delivery through automated build, test, and deployment pipelines.
Code Examples
$ java -jar jenkins.war --httpPort=8080
$ curl -X GET http://localhost:8080/api/json
Jenkins Architecture
Controller-agent architecture with the controller orchestrating builds and agents executing workloads in distributed, scalable environments.
Code Examples
$ agent {
kubernetes {
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
- name: maven
image: maven:3.8.1-jdk-11
'''
}
}
$ agent { label 'linux && docker' }
Pipeline Types & Job Categories
Jenkins supports multiple job types from simple Freestyle projects to sophisticated Pipeline-as-Code implementations for different automation needs.
Code Examples
$ pipeline {
agent any
stages {
stage('Build') {
steps {
sh 'npm ci'
sh 'npm run build'
}
}
}
}
$ node {
stage('Checkout') {
checkout scm
}
stage('Build') {
sh 'make build'
}
}
Jenkinsfile Syntax & Structure
Jenkinsfiles define Pipeline-as-Code using Declarative (structured) or Scripted (Groovy-based) syntax for version-controlled build automation.
Code Examples
$ pipeline {
agent any
stages {
stage('Test') {
steps {
sh 'go test ./...'
}
}
}
post {
always {
junit 'test-results.xml'
}
}
}
$ @Library('shared-library') _
pipeline {
agent any
stages {
stage('Deploy') {
steps {
deployToEnvironment('staging')
}
}
}
}
Pipeline Stages & Steps
Stages organize pipeline phases logically while steps execute specific actions, supporting parallel execution and conditional logic.
Code Examples
$ stages {
stage('Build') {
steps {
sh 'mvn clean compile'
}
}
stage('Test') {
parallel {
stage('Unit Tests') {
steps {
sh 'mvn test'
}
}
stage('Integration Tests') {
steps {
sh 'mvn integration-test'
}
}
}
}
}
$ stage('Deploy') {
when {
branch 'main'
}
steps {
sh 'kubectl apply -f deployment.yaml'
}
}
Plugin Ecosystem & Management
Jenkins' extensive plugin ecosystem extends functionality for SCM, build tools, notifications, cloud platforms, and integrations while requiring careful management.
Code Examples
$ # plugins.txt
git:latest
pipeline-stage-view:latest
blue-ocean:latest
docker-plugin:latest
kubernetes:latest
$ jenkins-plugin-cli --plugin-file plugins.txt --plugin-download-directory /var/jenkins_home/plugins
Build Triggers & Automation
Multiple trigger mechanisms enable automated builds through SCM changes, schedules, dependencies, and external events for comprehensive CI/CD automation.
Code Examples
$ triggers {
pollSCM('H/15 * * * *')
cron('@daily')
upstream(upstreamProjects: 'upstream-job', threshold: hudson.model.Result.SUCCESS)
}
$ curl -X POST -H 'Authorization: Bearer TOKEN' http://jenkins/job/my-job/build
Credentials & Security Management
Centralized, encrypted credential storage with role-based access control, supporting multiple secret types and secure injection into build processes.
Code Examples
$ withCredentials([
usernamePassword(credentialsId: 'docker-hub', usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASS'),
string(credentialsId: 'api-token', variable: 'API_TOKEN')
]) {
sh 'docker login -u $DOCKER_USER -p $DOCKER_PASS'
sh 'curl -H "Authorization: Bearer $API_TOKEN" api.example.com'
}
$ environment {
VAULT_ADDR = 'https://vault.company.com'
VAULT_NAMESPACE = 'jenkins'
}
Workspace & Artifact Management
Efficient workspace and artifact management ensures clean builds, proper storage, and reliable artifact distribution across pipeline stages.
Code Examples
$ pipeline {
agent any
stages {
stage('Build') {
steps {
sh 'make build'
archiveArtifacts artifacts: 'dist/**/*.jar', fingerprint: true
publishHTML([
allowMissing: false,
alwaysLinkToLastBuild: true,
keepAll: true,
reportDir: 'reports',
reportFiles: 'index.html',
reportName: 'Build Report'
])
}
}
}
post {
always {
cleanWs()
}
}
}
$ stash includes: 'target/**/*.jar', name: 'build-artifacts'
// Later stage
unstash 'build-artifacts'
Distributed Builds & Scaling
Horizontal scaling through agent management, load balancing, and cloud-native architectures for high-throughput CI/CD operations.
Code Examples
$ pipeline {
agent {
kubernetes {
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
- name: maven
image: maven:3.8.1-jdk-11
command:
- sleep
args:
- 99d
'''
}
}
stages {
stage('Build') {
steps {
container('maven') {
sh 'mvn clean package'
}
}
}
}
}
$ agent {
label 'linux && docker && high-memory'
}
Git / GitHub
12 concepts • 20 questions
What is Git / GitHub?
Git is a distributed version control system that tracks changes in source code during software development. It enables multiple developers to collaborate efficiently, maintains complete project history, and supports branching and merging workflows. Git provides features like commit tracking, branch management, conflict resolution, and remote repository synchronization. Understanding Git is essential for modern software development and DevOps practices.
Learning Sections
Choose your learning path
Introduction to Git & GitHub
Git is a distributed version control system, while GitHub is a cloud-based platform that hosts Git repositories and provides collaboration features.
Code Examples
$ git --version
$ git clone https://github.com/user/repo.git
Version Control Fundamentals
Understanding repositories, working directory, staging area, and the three-tree architecture that makes Git powerful.
Code Examples
$ git status --porcelain
$ git log --oneline --graph --all
Essential Git Commands
Master the fundamental Git commands for daily development workflow: initialization, staging, committing, and synchronization.
Code Examples
$ git init
git add README.md
git commit -m 'Initial commit'
git remote add origin https://github.com/user/repo.git
git push -u origin main
$ git add -p
Branching & Branch Management
Git branches enable parallel development through lightweight, movable pointers that allow safe experimentation and feature isolation.
Code Examples
$ git checkout -b feature/user-auth
git push -u origin feature/user-auth
$ git branch --merged | grep -v main | xargs git branch -d
Merging & Integration Strategies
Different merge strategies (merge commits, fast-forward, squash) provide various approaches to integrating branch changes.
Code Examples
$ git merge feature/api
# If conflicts occur:
git status
# Edit conflicted files
git add .
git commit
$ git rebase -i HEAD~3
Remote Repository Operations
Managing connections to remote repositories through fetch, push, pull operations and understanding remote tracking branches.
Code Examples
$ git remote add upstream https://github.com/original/repo.git
git fetch upstream
git merge upstream/main
$ git push --force-with-lease origin feature-branch
GitHub Collaboration Features
GitHub extends Git with collaboration tools: forks, pull requests, issues, and GitHub Actions for comprehensive project management.
Code Examples
$ # .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- run: npm test
$ gh pr create --title 'Add new feature' --body 'Description of changes'
Git Stash & Temporary Storage
Git stash provides temporary storage for work-in-progress changes, enabling clean branch switching and emergency fixes.
Code Examples
$ git stash push -m 'WIP: user authentication feature'
git checkout hotfix-branch
# Make hotfix
git checkout feature-branch
git stash pop
$ git stash show -p stash@{1}
.gitignore & File Management
Controlling which files Git tracks through .gitignore patterns, global ignores, and file management best practices.
Code Examples
$ # .gitignore
# Dependencies
node_modules/
*.log
# Build outputs
dist/
build/
# Environment
.env
.env.local
# IDE
.vscode/
*.swp
$ git config --global core.excludesfile ~/.gitignore_global
Tagging & Release Management
Git tags mark specific commits for releases, milestones, and important points in project history with lightweight or annotated formats.
Code Examples
$ git tag -a v1.2.0 -m 'Release 1.2.0: Added user authentication'
git push origin v1.2.0
$ git describe --tags --abbrev=0
Advanced Git Operations
Advanced Git features including interactive rebase, cherry-pick, reflog, and repository maintenance for complex workflows.
Code Examples
$ git rebase -i HEAD~3
# In editor: change 'pick' to 'squash' for commits to combine
# Save and edit commit message
$ git bisect start
git bisect bad HEAD
git bisect good v1.0.0
# Test each commit Git presents
git bisect good/bad
git bisect reset
Git Configuration & Customization
Comprehensive Git configuration including user identity, aliases, hooks, and environment-specific settings for optimal workflow.
Code Examples
$ # ~/.gitconfig
[user]
name = John Doe
email = john@example.com
[alias]
st = status -sb
co = checkout
br = branch
unstage = reset HEAD --
[includeIf "gitdir:~/work/"]
path = ~/.gitconfig-work
$ git config --global core.hooksPath ~/.githooks
GitOps
10 concepts • 20 questions
What is GitOps?
GitOps is an operational framework that uses Git repositories as the single source of truth for declarative infrastructure and application configuration. It applies Git workflows to operations, enabling automated deployments, rollbacks, and infrastructure management through pull requests and Git commits. GitOps promotes transparency, auditability, and consistency by treating infrastructure and application configurations as code, with Git serving as the control plane.
Learning Sections
Choose your learning path
Introduction to GitOps
GitOps is an operational framework that uses Git repositories as the single source of truth for declarative infrastructure and application configuration.
Code Examples
$ git clone https://github.com/company/k8s-configs.git
cd k8s-configs
kubectl apply -f apps/production/
$ # .github/workflows/validate.yml
name: Validate Manifests
on: [push, pull_request]
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- run: kubectl apply --dry-run=client -f manifests/
GitOps Core Principles
Four fundamental principles: declarative configuration, version control as source of truth, automated deployment, and continuous reconciliation.
Code Examples
$ # Declarative Kubernetes manifest
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-app
spec:
replicas: 3
selector:
matchLabels:
app: web-app
template:
metadata:
labels:
app: web-app
spec:
containers:
- name: web
image: nginx:1.21
$ git log --oneline --graph
git show <commit-hash>
git revert <commit-hash>
Core Principles
Declarative, versioned, automated sync, continuous reconciliation.
Code Examples
$ kubectl apply -f manifests/ (performed by controller)
Argo CD - GitOps for Kubernetes
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes with a rich UI and multi-cluster support.
Code Examples
$ apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: web-app
namespace: argocd
spec:
project: default
source:
repoURL: https://github.com/company/k8s-configs
targetRevision: HEAD
path: apps/web-app
destination:
server: https://kubernetes.default.svc
namespace: production
syncPolicy:
automated:
prune: true
selfHeal: true
$ argocd app create web-app \
--repo https://github.com/company/k8s-configs \
--path apps/web-app \
--dest-server https://kubernetes.default.svc \
--dest-namespace production
Flux CD - GitOps Toolkit
Flux is a modular GitOps toolkit providing source, kustomize, helm, and notification controllers for Kubernetes.
Code Examples
$ flux create source git webapp \
--url=https://github.com/company/k8s-configs \
--branch=main \
--interval=1m
flux create kustomization webapp \
--target-namespace=production \
--source=webapp \
--path="./apps/webapp" \
--prune=true \
--interval=5m
$ apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: nginx
namespace: default
spec:
interval: 5m
chart:
spec:
chart: nginx
version: '1.x'
sourceRef:
kind: HelmRepository
name: bitnami
values:
replicaCount: 3
Pull-based vs Push-based Deployments
GitOps favors pull-based architecture where cluster agents pull configurations from Git, improving security and reliability over push-based CI/CD.
Code Examples
$ # Pull-based reconciliation loop
while true; do
git fetch origin main
kubectl diff -f manifests/
kubectl apply -f manifests/
kubectl get events --sort-by='.lastTimestamp'
sleep 30
done
$ # Push-based deployment (traditional)
# In CI/CD pipeline:
kubectl config use-context production
kubectl apply -f manifests/
# vs Pull-based (GitOps)
# Agent in cluster:
flux reconcile kustomization webapp --with-source
GitOps Repository Structure
Organizing Git repositories for GitOps with proper separation of concerns, environment management, and application configurations.
Code Examples
$ # GitOps Repository Structure
k8s-configs/
├── apps/
│   ├── web-app/
│   │   ├── base/
│   │   │   ├── deployment.yaml
│   │   │   ├── service.yaml
│   │   │   └── kustomization.yaml
│   │   └── overlays/
│   │       ├── dev/
│   │       ├── staging/
│   │       └── production/
│   └── api-service/
├── infrastructure/
│   ├── namespaces/
│   ├── rbac/
│   └── monitoring/
└── clusters/
    ├── dev-cluster/
    ├── staging-cluster/
    └── prod-cluster/
$ # Kustomization overlay example
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
replicas:
- name: web-app
count: 5
images:
- name: web-app
newTag: v1.2.3
Secrets Management in GitOps
Handling sensitive data in GitOps workflows through external secret managers, sealed secrets, and encryption strategies.
Code Examples
$ # Sealed Secret example
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: db-credentials
namespace: production
spec:
encryptedData:
username: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEQAx...
password: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEQAx...
$ # External Secrets Operator
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: vault-secret
spec:
refreshInterval: 15s
secretStoreRef:
name: vault-backend
kind: SecretStore
target:
name: db-credentials
data:
- secretKey: username
remoteRef:
key: database
property: username
Multi-Environment GitOps
Managing multiple environments (dev/staging/prod) in GitOps through branching strategies, directory structures, and promotion workflows.
Code Examples
$ # Branch-per-environment workflow
git checkout dev
git add .
git commit -m 'Add new feature'
git push origin dev
# Promote to staging
git checkout staging
git merge dev
git push origin staging
# Promote to production
git checkout main
git merge staging
git push origin main
$ # Directory-per-environment with Kustomize
envs/
├── base/
│   ├── deployment.yaml
│   └── kustomization.yaml
├── dev/
│   ├── kustomization.yaml
│   └── replica-patch.yaml
├── staging/
│   ├── kustomization.yaml
│   └── resource-patch.yaml
└── production/
    ├── kustomization.yaml
    └── production-patch.yaml
GitOps Benefits & Challenges
Understanding the advantages of GitOps (auditability, rollbacks, security) and common challenges (secrets, complexity, tooling).
Code Examples
$ # Audit trail example
git log --oneline --graph --all
* 2f8a9c1 (HEAD -> main) Update production replicas to 5
* 1a7b3d2 Add health check endpoint
* 9e4f6c8 Initial application deployment
# Rollback example
git revert 2f8a9c1
# This creates a new commit that undoes the replica change
$ # Measuring GitOps success
# Deployment frequency
git log --since='1 month ago' --oneline | wc -l
# Lead time (commit to deployment)
kubectl get applications -o json | jq '.items[] | {name: .metadata.name, lastSync: .status.operationState.finishedAt}'
# Change failure rate
kubectl get events --field-selector reason=Failed
GitHub Actions
11 concepts • 20 questions
What is GitHub Actions?
GitHub Actions is a CI/CD platform integrated directly into GitHub repositories that automates software workflows. It enables you to build, test, and deploy code directly from GitHub using customizable workflows triggered by repository events. GitHub Actions supports matrix builds, parallel jobs, marketplace actions, and seamless integration with GitHub's ecosystem. It simplifies DevOps automation by providing native CI/CD capabilities within the development platform.
Learning Sections
Choose your learning path
Workflow Structure & Architecture
Hierarchical structure: Workflows contain jobs, jobs contain steps, steps execute actions or commands.
Code Examples
$ name: CI Pipeline
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: npm test
deploy:
needs: test
runs-on: ubuntu-latest
steps:
- run: npm run deploy
$ jobs:
build:
outputs:
version: ${{ steps.version.outputs.value }}
steps:
- id: version
run: echo "value=1.0.0" >> $GITHUB_OUTPUT
Workflow Files & Organization
YAML files in .github/workflows/ directory define automated processes with specific naming and organization patterns.
Code Examples
$ .github/workflows/
├── ci.yml        # Continuous Integration
├── deploy.yml    # Deployment
├── release.yml   # Release Management
└── security.yml  # Security Scanning
$ name: "CI Pipeline"
on: [push, pull_request]
# Workflow content here
Triggers & Event Types
Comprehensive event system including repository events, scheduled triggers, manual dispatch, and external webhooks.
Code Examples
$ on:
push:
branches: [main, develop]
paths: ['src/**', '!docs/**']
pull_request:
types: [opened, synchronize]
schedule:
- cron: '0 2 * * 1-5'
workflow_dispatch:
inputs:
environment:
description: 'Environment to deploy'
required: true
default: 'staging'
$ on:
release:
types: [published]
repository_dispatch:
types: [deploy-prod]
Runners & Execution Environment
Execution environments including GitHub-hosted runners, self-hosted runners, and container-based execution.
Code Examples
$ jobs:
test:
runs-on: ubuntu-latest
build:
runs-on: [self-hosted, linux, large]
deploy:
runs-on: ubuntu-latest
container:
image: node:18
options: --cpus 2
$ runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
Actions & Marketplace
Pre-built, reusable actions from GitHub Marketplace and custom actions for workflow automation.
Code Examples
$ steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '18'
cache: 'npm'
- uses: actions/upload-artifact@v4
with:
name: build-files
path: dist/
$ uses: ./.github/actions/custom-deploy
with:
environment: production
api-key: ${{ secrets.API_KEY }}
Environment Variables & Secrets Management
Secure configuration management through environment variables, encrypted secrets, and environment protection rules.
Code Examples
$ env:
NODE_ENV: production
API_URL: https://api.example.com
jobs:
deploy:
environment: production
steps:
- run: |
echo "Deploying to $NODE_ENV"
curl -H "Authorization: Bearer ${{ secrets.API_TOKEN }}" $API_URL
$ permissions:
contents: read
issues: write
pull-requests: write
Artifacts & Caching Strategies
Performance optimization through intelligent caching and artifact management for faster builds and data sharing.
Code Examples
$ - name: Cache dependencies
uses: actions/cache@v4
with:
path: |
~/.npm
~/.cache/pip
target/
key: ${{ runner.os }}-deps-${{ hashFiles('**/package-lock.json', '**/requirements.txt', '**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-deps-
${{ runner.os }}-
$ - uses: actions/upload-artifact@v4
with:
name: build-${{ github.sha }}
path: |
dist/
coverage/
retention-days: 30
Security & Best Practices
Security hardening, supply chain protection, and workflow best practices for safe automation.
Code Examples
$ permissions:
contents: read
issues: write
pull-requests: write
id-token: write # for OIDC
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
- name: Configure AWS
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
aws-region: us-east-1
$ - name: Validate input
env:
USER_INPUT: ${{ github.event.inputs.message }}
run: |
# Safe: using environment variable
echo "Processing: $USER_INPUT"
Matrix Builds & Parallel Execution
Parallel job execution across multiple configurations using matrix strategies for comprehensive testing.
Code Examples
$ strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
node: [18, 20]
include:
- os: ubuntu-latest
node: 16
experimental: true
exclude:
- os: macos-latest
node: 18
$ runs-on: ${{ matrix.os }}
steps:
- run: echo "Testing on ${{ matrix.os }} with Node ${{ matrix.node }}"
Conditional Execution & Expressions
Dynamic workflow control using conditional expressions, contexts, and functions for intelligent automation.
Code Examples
$ jobs:
deploy:
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
steps:
- name: Deploy to production
if: success()
run: echo "Deploying..."
- name: Cleanup on failure
if: failure()
run: echo "Cleaning up..."
$ if: |
contains(github.event.head_commit.message, '[skip ci]') == false &&
(startsWith(github.ref, 'refs/heads/feature/') || github.ref == 'refs/heads/develop')
Reusable Workflows & Composite Actions
Workflow reusability through callable workflows and composite actions for DRY principles and standardization.
Code Examples
$ # .github/workflows/reusable-deploy.yml
on:
workflow_call:
inputs:
environment:
required: true
type: string
secrets:
api-key:
required: true
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- run: echo "Deploying to ${{ inputs.environment }}"
$ jobs:
production:
uses: ./.github/workflows/reusable-deploy.yml
with:
environment: production
secrets:
api-key: ${{ secrets.PROD_API_KEY }}