Interview
Ace your DevOps interviews with comprehensive Q&A covering all major topics
328 Interview Questions
11 Technology Areas
Updated Daily
Q1
CI/CD
What's the difference between Continuous Integration, Continuous Delivery, and Continuous Deployment?
Answer
Explanation: Continuous Integration (CI) automatically builds, tests, and validates code changes when developers commit to shared repositories, catching integration issues early. Continuous Delivery (CD) extends CI by automatically preparing releases for deployment, ensuring code is always deployment-ready while still requiring manual approval for production. Continuous Deployment goes further by automatically deploying every validated change to production without human intervention. CI focuses on code quality, CD on release readiness, and Continuous Deployment on automated production releases.
DevOps Use: CI prevents integration hell and catches bugs early. CD ensures consistent, reliable releases with rollback capabilities. Continuous Deployment enables rapid feature delivery and reduces time-to-market for low-risk applications.
Q2
CI/CD
Describe a typical CI/CD pipeline (stages and purpose of each).
Answer
Explanation: A typical pipeline includes: 1) Source stage - triggers on code changes, checks out code, 2) Build stage - compiles code, resolves dependencies, creates artifacts, 3) Test stage - runs unit, integration, and security tests in parallel, 4) Package stage - creates deployable artifacts (Docker images, binaries), 5) Deploy to staging - deploys for integration testing, 6) Acceptance tests - runs end-to-end and performance tests, 7) Deploy to production - releases to users with monitoring. Each stage acts as a quality gate, failing fast to provide quick feedback.
DevOps Use: Stages provide clear separation of concerns, enable parallel execution, and create checkpoints for quality assurance. Failed stages prevent bad code from reaching production.
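A minimal sketch of these stages as a GitHub Actions workflow; job names, make targets, and the deploy script are illustrative assumptions, not a prescribed layout:

  name: pipeline
  on: [push]
  jobs:
    build-and-test:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4      # source stage: check out the code
        - run: make build                # build stage: compile, create artifacts
        - run: make test                 # test stage: unit/integration tests
    deploy-staging:
      needs: build-and-test
      runs-on: ubuntu-latest
      steps:
        - run: ./deploy.sh staging       # deploy stage: hypothetical deploy script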
Q3
CI/CD
What events commonly trigger a pipeline (push, PR, tags, schedules, manual approvals) and why?
Answer
Explanation: Common triggers include: Push events (immediate feedback on code changes), Pull Requests (validate changes before merge), Tags (release deployments), Scheduled triggers (nightly builds, dependency updates), Manual triggers (hotfixes, production deployments), and Webhook events (external system integration). Each serves different purposes: push for rapid feedback, PR for quality gates, tags for releases, schedules for maintenance, manual for controlled deployments. Trigger selection affects development velocity and deployment safety.
DevOps Use: Push triggers enable fast feedback loops, PR triggers enforce code review, tag triggers automate releases, scheduled triggers handle maintenance tasks, manual triggers provide deployment control.
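For example, a single GitHub Actions workflow can combine several of these triggers (branch names and the cron schedule are illustrative):

  on:
    push:
      branches: [main]       # fast feedback on every commit
      tags: ['v*']           # release deployments
    pull_request:            # quality gate before merge
    schedule:
      - cron: '0 2 * * *'    # nightly maintenance build
    workflow_dispatch:       # manual trigger for controlled deployments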
Q4
CI/CD
How do you design fast feedback in CI (test selection, parallelism, caching)?
Answer
Explanation: Fast feedback strategies include: Test selection (run only affected tests first, full suite later), Test parallelism (split tests across multiple runners), Smart caching (dependencies, build artifacts, test results), Fail-fast approach (stop on first critical failure), Test prioritization (critical tests first), and Incremental builds (build only changed components). Use test impact analysis to identify which tests to run based on code changes. Implement test sharding and matrix builds for parallel execution.
DevOps Use: Faster feedback reduces context switching for developers, enables more frequent commits, and improves development velocity while maintaining quality.
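A sketch of test sharding via a matrix, assuming a Jest test suite (the shard count of four is arbitrary):

  jobs:
    test:
      runs-on: ubuntu-latest
      strategy:
        fail-fast: true                 # cancel remaining shards on first failure
        matrix:
          shard: [1, 2, 3, 4]           # split the suite across four runners
      steps:
        - uses: actions/checkout@v4
        - run: npm ci
        - run: npx jest --shard=${{ matrix.shard }}/4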
Q5
CI/CD
What's the role of runners/agents in CI systems (hosted vs self-hosted)?
Answer
Explanation: Runners/agents are compute environments that execute pipeline jobs. Hosted runners (GitHub Actions, GitLab.com) provide managed infrastructure with pre-installed tools, automatic scaling, and maintenance-free operation. Self-hosted runners offer custom environments, specific hardware/software requirements, network access to internal resources, and cost control for high-volume usage. Runners can be ephemeral (created per job) or persistent (reused across jobs). Choose based on security, performance, cost, and customization needs.
DevOps Use: Hosted runners for standard workflows and quick setup. Self-hosted for custom environments, internal network access, GPU workloads, or cost optimization at scale.
Q6
CI/CD
How do you handle secrets safely in pipelines (env secrets, OIDC, vaults, masking)?
Answer
Explanation: Safe secret handling includes: Environment secrets (encrypted storage in CI platform), OIDC/Workload Identity (keyless authentication using short-lived tokens), External vaults (HashiCorp Vault, AWS Secrets Manager), Secret masking (automatic redaction in logs), Least privilege access (minimal permissions), Secret rotation (regular updates), and Audit logging (track secret usage). Never hardcode secrets in code or logs. Use different secrets per environment and implement secret scanning in pipelines.
DevOps Use: Secure secret management prevents credential leakage, enables compliance, supports secret rotation, and provides audit trails for security incidents.
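A sketch of keyless OIDC authentication to AWS from GitHub Actions (the role ARN is a placeholder):

  permissions:
    id-token: write     # let the job request a short-lived OIDC token
    contents: read
  jobs:
    deploy:
      runs-on: ubuntu-latest
      steps:
        - uses: aws-actions/configure-aws-credentials@v4
          with:
            role-to-assume: arn:aws:iam::123456789012:role/ci-deploy
            aws-region: us-east-1

No long-lived access keys are stored in the CI system; the cloud provider trusts the workflow's identity token instead.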
Q7
CI/CD
What is artifact management and why promote the same artifact across environments?
Answer
Explanation: Artifact management involves storing, versioning, and distributing build outputs (binaries, Docker images, packages) through registries or repositories. Promoting the same artifact across environments (dev → staging → prod) ensures consistency, eliminates build variations, and reduces deployment risks. Artifacts should be immutable, tagged with versions, and include metadata (build info, dependencies, security scan results). Use artifact promotion rather than rebuilding for each environment.
DevOps Use: Consistent artifacts eliminate "works on my machine" issues, enable reliable rollbacks, support compliance auditing, and reduce deployment time by avoiding rebuilds.
Q8
CI/CD
How do you speed up slow pipelines (dependency caching, build cache keys, matrix builds)?
Answer
Explanation: Pipeline optimization techniques include: Dependency caching (cache node_modules, Maven dependencies with proper cache keys), Build caching (incremental builds, Docker layer caching), Parallel execution (matrix builds, job parallelism), Resource optimization (appropriate runner sizes), Pipeline splitting (separate fast/slow tests), and Selective execution (path-based triggers, changed file detection). Use cache keys based on dependency files (package-lock.json, pom.xml) and implement cache invalidation strategies.
DevOps Use: Faster pipelines improve developer productivity, enable more frequent deployments, reduce infrastructure costs, and provide quicker feedback on issues.
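For example, setup-node's built-in caching keys the dependency cache on the lockfile (the Node version is illustrative):

  - uses: actions/setup-node@v4
    with:
      node-version: 20
      cache: npm        # restores ~/.npm keyed on package-lock.json
  - run: npm ci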
Q9
CI/CD
How do you make pipelines reproducible and deterministic (pinning, lockfiles, containerized builds)?
Answer
Explanation: Reproducible builds require: Dependency pinning (exact versions in lockfiles like package-lock.json, Pipfile.lock), Containerized builds (consistent runtime environments), Tool version pinning (Node.js, Java versions), Environment standardization (same OS, libraries), Deterministic timestamps (SOURCE_DATE_EPOCH), and Isolated builds (clean environments per build). Use Docker for consistent build environments and implement checksum verification for dependencies.
DevOps Use: Reproducible builds enable reliable debugging, consistent deployments, security auditing, and compliance with supply chain security requirements.
Q10
CI/CD
What branch strategy works best for CI (trunk-based vs long-lived feature branches) and why?
Answer
Explanation: Trunk-based development works best for CI, with developers committing frequently to the main branch and using feature flags for incomplete features. This enables continuous integration, reduces merge conflicts, and provides fast feedback. Long-lived feature branches create integration delays, complex merges, and delayed feedback. Short-lived feature branches (1-2 days) offer a compromise. Trunk-based development requires good testing, feature flags, and team discipline but enables true continuous integration.
DevOps Use: Trunk-based development enables faster delivery, reduces integration risks, simplifies CI/CD pipelines, and improves team collaboration through frequent integration.
Q11
CI/CD
How do you gate merges with quality checks (required status checks, code coverage, Sonar quality gates)?
Answer
Explanation: Quality gates include: Required status checks (all CI checks must pass), Code coverage thresholds (minimum percentage required), Static analysis gates (SonarQube quality gates), Security scans (vulnerability thresholds), and Performance benchmarks (regression detection). Configure branch protection rules to enforce these checks before merging. Implement different thresholds for different branches (stricter for main) and provide clear feedback on failures.
DevOps Use: Quality gates maintain code quality, prevent technical debt accumulation, ensure security standards, and provide consistent quality metrics across teams.
Q12
CI/CD
How do you deal with flaky tests and keep the main branch green?
Answer
Explanation: Flaky test management includes: Test quarantine (isolate flaky tests), Retry mechanisms (automatic retries with limits), Root cause analysis (identify timing, environment, or dependency issues), Test stability monitoring (track failure rates), Parallel test execution (reduce resource contention), and Environment standardization (consistent test conditions). Implement test result analytics to identify patterns and prioritize fixes. Never ignore flaky tests - they erode confidence in CI.
DevOps Use: Stable tests maintain developer confidence, reduce false positives, enable reliable automated deployments, and improve overall development velocity.
Q13
CI/CD
What is pipeline-as-code and why is it preferred over UI-configured jobs?
Answer
Explanation: Pipeline-as-code defines CI/CD workflows in version-controlled files (YAML, JSON) stored alongside application code. Benefits include: Version control (track changes, rollback), Code review (peer review of pipeline changes), Reproducibility (consistent across environments), Documentation (self-documenting workflows), and Portability (easy migration between systems). Examples include GitHub Actions workflows, GitLab CI YAML, Jenkins Pipelines, and Azure DevOps YAML pipelines.
DevOps Use: Pipeline-as-code enables GitOps workflows, improves collaboration, reduces configuration drift, and provides audit trails for compliance.
Q14
CI/CD
How do you structure multi-project/monorepo pipelines (path filters, workspaces, matrices)?
Answer
Explanation: Monorepo pipeline strategies include: Path-based triggers (run pipelines only for changed paths), Workspace detection (identify affected projects), Matrix builds (parallel execution per project), Dependency graphs (build in correct order), Selective testing (test only affected components), and Shared pipeline templates (reusable workflows). Use tools like Nx, Lerna, or Bazel for dependency management and selective builds. Implement change detection to optimize build times.
DevOps Use: Efficient monorepo pipelines reduce build times, optimize resource usage, maintain project isolation, and enable independent deployments while sharing common infrastructure.
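A sketch of path-based triggering in GitHub Actions (the directory names are illustrative):

  on:
    push:
      paths:
        - 'services/api/**'    # run only when the API service changes
        - 'shared/**'          # ...or a shared library it depends on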
Q15
CI/CD
How do you implement blue-green vs canary deployments, and when do you choose each?
Answer
Explanation: Blue-green deployment maintains two identical environments, switching traffic instantly between them for zero-downtime deployments. Canary deployment gradually shifts traffic from old to new version, monitoring metrics before full rollout. Blue-green offers instant rollback and full testing but requires double infrastructure. Canary reduces risk through gradual rollout but requires traffic splitting capabilities. Choose blue-green for critical systems needing instant rollback, canary for gradual risk mitigation.
DevOps Use: Blue-green for database-heavy applications, regulated environments. Canary for microservices, user-facing applications where gradual rollout provides better risk management.
Q16
CI/CD
How do you implement safe rollbacks (versioned artifacts, traffic switching, DB rollback strategy)?
Answer
Explanation: Safe rollback strategies include: Versioned artifacts (immutable, tagged releases), Database migration strategies (backward-compatible changes, separate schema/data migrations), Traffic switching (load balancer configuration), Feature flag rollbacks (instant disable), and State management (stateless applications, external state stores). Implement rollback testing, automated rollback triggers based on metrics, and clear rollback procedures. Plan rollbacks during deployment design.
DevOps Use: Safe rollbacks reduce MTTR (Mean Time To Recovery), minimize user impact during incidents, enable confident deployments, and support rapid iteration.
Q17
CI/CD
How do you manage environment promotions (dev → test → staging → prod) and approvals?
Answer
Explanation: Environment promotion involves: Automated promotion triggers (successful tests, approvals), Approval workflows (manual gates for production), Environment-specific configurations (secrets, scaling), Promotion criteria (quality gates, security scans), and Audit trails (who deployed what when). Use GitOps for configuration management, implement approval matrices based on risk, and maintain environment parity. Automate lower environments, add manual gates for production.
DevOps Use: Controlled promotions ensure quality, maintain compliance, provide audit trails, and balance automation with governance requirements.
Q18
CI/CD
What's the difference between build, release, and deploy stages?
Answer
Explanation: Build stage compiles source code, resolves dependencies, and creates artifacts (binaries, containers). Release stage packages artifacts with environment-specific configurations, creates release candidates, and prepares for deployment. Deploy stage takes release artifacts and deploys them to target environments, configures services, and validates deployment. Build is environment-agnostic, release adds environment context, deploy executes the actual deployment. This separation enables artifact reuse and environment-specific customization.
DevOps Use: Stage separation enables parallel workflows, artifact reuse across environments, independent scaling of each stage, and clear responsibility boundaries.
Q19
CI/CD
How do you run database migrations in CD without downtime (migrate-first, backward-compatible changes)?
Answer
Explanation: Zero-downtime migration strategies include: Backward-compatible changes (additive migrations first), Expand-contract pattern (add new, migrate data, remove old), Blue-green database migrations (separate database instances), Online schema changes (tools like gh-ost, pt-online-schema-change), and Feature flags (gradual migration activation). Always separate schema changes from data changes, test migrations on production-like data, and implement rollback procedures.
DevOps Use: Zero-downtime migrations maintain service availability, reduce deployment risks, enable continuous deployment, and support rapid iteration.
Q20
CI/CD
How do feature flags complement CI/CD and reduce deployment risk?
Answer
Explanation: Feature flags decouple deployment from release, enabling: Trunk-based development (merge incomplete features safely), Gradual rollouts (percentage-based user exposure), A/B testing (compare feature variants), Instant rollbacks (disable features without deployment), and Environment-specific features (different features per environment). Implement feature flag management systems, monitoring, and cleanup processes for technical debt prevention.
DevOps Use: Feature flags enable continuous deployment, reduce blast radius of issues, support experimentation, and provide operational control over feature exposure.
Q21
CI/CD
What security controls do you add to CI/CD (SAST, dependency scanning, container scanning, DAST)?
Answer
Explanation: CI/CD security controls include: SAST (Static Application Security Testing) for code vulnerabilities, Dependency scanning for vulnerable libraries, Container scanning for image vulnerabilities, DAST (Dynamic Application Security Testing) for runtime issues, Secret scanning for exposed credentials, and License compliance checking. Implement security gates that fail builds on high-severity issues, provide developer feedback, and integrate with security tools like Snyk, SonarQube, or cloud-native scanners.
DevOps Use: Shift-left security catches vulnerabilities early, reduces remediation costs, ensures compliance, and maintains security posture throughout development lifecycle.
Q22
CI/CD
How do you prevent secret leakage in logs and images (masking, scanning, commit policies)?
Answer
Explanation: Secret leakage prevention includes: Automatic log masking (redact secrets in CI logs), Pre-commit hooks (scan commits for secrets), Image scanning (detect secrets in container layers), Environment variable protection (secure injection methods), and Audit logging (track secret access). Use tools like git-secrets, truffleHog, or cloud-native secret scanners. Implement secret rotation, least-privilege access, and incident response procedures for exposed secrets.
DevOps Use: Prevent credential exposure, maintain compliance, reduce security incidents, and protect against supply chain attacks through comprehensive secret protection.
Q23
CI/CD
What is software supply-chain security and what does SLSA/provenance mean for CI?
Answer
Explanation: Software supply-chain security protects against attacks on development and deployment processes. SLSA (Supply-chain Levels for Software Artifacts) provides framework for securing software supply chains through build integrity, source integrity, and dependency management. Provenance tracks artifact origins, build processes, and dependencies. Implement signed commits, verified builds, dependency pinning, and artifact attestation. Use tools like Sigstore, in-toto, or cloud-native supply chain security features.
DevOps Use: Supply chain security prevents malicious code injection, ensures build integrity, enables compliance with security standards, and provides audit trails for incident response.
Q24
CI/CD
How do you isolate and secure CI runners (ephemeral runners, least privilege, network egress controls)?
Answer
Explanation: Runner security includes: Ephemeral runners (fresh environment per job), Least privilege access (minimal permissions), Network segmentation (restricted egress), Container isolation (sandboxed execution), and Resource limits (prevent resource exhaustion). Use dedicated runner pools for sensitive workloads, implement network policies, monitor runner activity, and regularly update runner images. Consider using managed runners for better security posture.
DevOps Use: Secure runners prevent lateral movement, protect sensitive data, ensure build integrity, and maintain compliance with security policies.
Q25
CI/CD
How do you cache dependencies correctly (keys, restore-keys, invalidation pitfalls)?
Answer
Explanation: Effective dependency caching requires: Precise cache keys (hash of lockfiles like package-lock.json), Restore keys (fallback for partial matches), Cache invalidation (when dependencies change), Scope management (per-branch or global), and Storage optimization (cache size limits). Common pitfalls include overly broad cache keys, missing invalidation triggers, and cache pollution. Implement cache warming strategies and monitor cache hit rates for optimization.
DevOps Use: Proper caching reduces build times, lowers infrastructure costs, improves developer experience, and enables faster feedback loops.
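A sketch with actions/cache showing a precise key plus a restore-keys fallback (the paths assume a Maven project):

  - uses: actions/cache@v4
    with:
      path: ~/.m2/repository
      key: maven-${{ runner.os }}-${{ hashFiles('**/pom.xml') }}
      restore-keys: |
        maven-${{ runner.os }}-    # partial match: start from the newest older cache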
Q26
CI/CD
How do you store and retain build artifacts and logs (TTL, retention policies, compliance)?
Answer
Explanation: Artifact retention strategies include: Time-based retention (TTL policies), Size-based limits (storage quotas), Compliance requirements (regulatory retention), Tiered storage (hot/cold/archive), and Selective retention (keep releases, clean builds). Implement automated cleanup, cost optimization through storage tiers, and legal hold capabilities. Consider artifact signing, metadata preservation, and disaster recovery requirements.
DevOps Use: Balanced retention policies manage storage costs, ensure compliance, enable debugging and rollbacks, and maintain audit trails for security incidents.
Q27
CI/CD
How do you handle multi-env configuration (templating, parameters, per-env secrets, GitOps)?
Answer
Explanation: Multi-environment configuration strategies include: Configuration templating (Helm, Kustomize), Parameter injection (environment-specific values), Secret management (per-environment secrets), GitOps workflows (git-based configuration), and Environment promotion (configuration versioning). Use tools like ArgoCD, Flux, or cloud-native configuration management. Implement configuration validation, drift detection, and rollback capabilities.
DevOps Use: Consistent configuration management reduces deployment errors, enables environment parity, supports compliance requirements, and provides audit trails.
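A minimal Kustomize layout as one example, where each environment overlays a shared base (file and directory names are illustrative):

  # overlays/prod/kustomization.yaml
  resources:
    - ../../base              # shared manifests
  patches:
    - path: replicas.yaml     # prod-only override, e.g. a higher replica count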
Q28
CI/CD
How do you observe and debug failing pipelines (artifacts, logs, test reports, traceability to commits)?
Answer
Explanation: Pipeline observability includes: Comprehensive logging (structured logs, log aggregation), Artifact preservation (build outputs, test reports), Traceability (link failures to commits/PRs), Metrics collection (build times, success rates), and Alerting (failure notifications). Implement log retention policies, searchable log storage, and correlation between pipeline events and code changes. Use tools like ELK stack, Grafana, or cloud-native observability platforms.
DevOps Use: Effective debugging reduces MTTR, improves developer productivity, enables root cause analysis, and supports continuous improvement of pipeline reliability.
Q29
CI/CD
Compare common CI/CD tools (GitHub Actions, GitLab CI, Jenkins): when would you pick each?
Answer
Explanation: GitHub Actions excels in GitHub ecosystem integration, marketplace actions, and ease of use. GitLab CI provides integrated DevOps platform, built-in security scanning, and Kubernetes integration. Jenkins offers maximum flexibility, extensive plugin ecosystem, and on-premises control. Choose GitHub Actions for GitHub-centric workflows, GitLab CI for integrated DevOps platform, Jenkins for complex enterprise requirements or on-premises needs. Consider factors like existing toolchain, team expertise, and specific requirements.
DevOps Use: Tool selection affects team productivity, maintenance overhead, integration capabilities, and long-term scalability of CI/CD processes.
Q30
CI/CD
What practices keep CI fast as teams scale (selective workflows on paths, concurrent jobs, reusable workflows)?
Answer
Explanation: Scaling CI practices include: Path-based workflow triggers (run only relevant pipelines), Concurrent job execution (parallel processing), Reusable workflows (shared pipeline templates), Selective testing (test impact analysis), Resource optimization (right-sized runners), and Pipeline splitting (separate fast/slow feedback loops). Implement workflow orchestration, dependency management, and resource pooling. Use monorepo tools for large codebases and implement intelligent build systems.
DevOps Use: Scalable CI practices maintain fast feedback loops, optimize resource utilization, reduce infrastructure costs, and support growing development teams.
Q31
DevOps Fundamentals
What is DevOps, and how does it differ from Agile?
Answer
Explanation: DevOps is a cultural and technical practice that combines software development and IT operations to enable faster, more reliable software delivery through automation, collaboration, and continuous improvement. While Agile focuses on iterative development and customer collaboration, DevOps extends beyond development to include deployment, monitoring, and operations. Agile addresses 'how to build software,' DevOps addresses 'how to deliver and maintain software.'
DevOps Use: DevOps enables end-to-end automation from code commit to production deployment, while Agile provides the development methodology that feeds into DevOps pipelines.
Q32
DevOps Fundamentals
What are the main goals of DevOps?
Answer
Explanation: The main goals of DevOps are: faster time-to-market through automated pipelines, improved software quality through continuous testing, enhanced collaboration between development and operations teams, increased deployment frequency with reduced failure rates, faster recovery from failures, and better customer satisfaction through reliable software delivery. DevOps aims to break down silos and create shared responsibility.
DevOps Use: These goals drive organizational transformation, tool selection, process improvements, and cultural changes to achieve competitive advantage through superior software delivery.
Q33
DevOps Fundamentals
What is the DevOps lifecycle, and why is it important?
Answer
Explanation: The DevOps lifecycle is a continuous cycle including: Plan (requirements and design), Code (development), Build (compilation and packaging), Test (automated testing), Release (deployment preparation), Deploy (production deployment), Operate (monitoring and maintenance), and Monitor (feedback collection). This cycle emphasizes continuous integration, delivery, feedback, and improvement.
DevOps Use: The lifecycle provides a framework for implementing DevOps practices, ensuring all phases are automated and integrated for seamless software delivery.
Q34
DevOps Fundamentals
What is the difference between continuous integration, continuous delivery, and continuous deployment in concept?
Answer
Explanation: Continuous Integration (CI) automatically integrates code changes from multiple developers, running automated tests to detect conflicts early. Continuous Delivery (CD) extends CI by automatically preparing code for release, ensuring it's always deployable but requires manual approval for production. Continuous Deployment automatically deploys every change that passes tests directly to production without manual intervention.
DevOps Use: CI prevents integration problems, CD ensures release readiness, and Continuous Deployment enables rapid feature delivery with minimal manual overhead.
Q35
DevOps Fundamentals
What is meant by 'shift-left' in DevOps, and why is it important?
Answer
Explanation: 'Shift-left' means moving activities like testing, security, and quality checks earlier in the development lifecycle, closer to the coding phase. Instead of testing at the end, you test during development. This includes unit testing, security scanning, code reviews, and performance testing integrated into the development process, catching issues when they're cheaper and easier to fix.
DevOps Use: Shift-left reduces bug fix costs, improves software quality, accelerates delivery, and prevents security vulnerabilities from reaching production.
Q36
DevOps Fundamentals
How does DevOps improve collaboration between development and operations teams?
Answer
Explanation: DevOps improves collaboration through: shared goals and metrics, cross-functional teams, shared tools and platforms, joint responsibility for production, blameless post-mortems, regular communication, and shared knowledge. It breaks down traditional silos by creating shared ownership of the entire application lifecycle from development to production support.
DevOps Use: Better collaboration reduces handoff delays, improves problem resolution, increases knowledge sharing, and creates more reliable software delivery.
Q37
DevOps Fundamentals
What are the benefits of version control in a DevOps culture?
Answer
Explanation: Version control in DevOps provides: complete change history and audit trails, collaboration through branching and merging, rollback capabilities for quick recovery, integration with CI/CD pipelines, infrastructure as code versioning, and compliance documentation. It serves as the foundation for automation, enabling reproducible builds and deployments.
DevOps Use: Version control enables automated deployments, change tracking, team collaboration, and serves as the single source of truth for all code and configuration.
Q38
DevOps Fundamentals
Explain 'blameless culture' and why it matters.
Answer
Explanation: Blameless culture focuses on learning from failures rather than assigning blame to individuals. When incidents occur, teams analyze system failures, process gaps, and improvement opportunities without punishing people. This encourages honest reporting, knowledge sharing, and continuous improvement. The goal is fixing systems and processes, not finding scapegoats.
DevOps Use: Blameless culture enables faster incident resolution, better learning from failures, increased innovation, and psychological safety for teams to experiment and improve.
Q39
DevOps Fundamentals
What is continuous feedback, and how does it impact team performance?
Answer
Explanation: Continuous feedback is the ongoing collection and sharing of information about system performance, user experience, code quality, and team processes. It includes automated monitoring, user feedback, performance metrics, and team retrospectives. This creates rapid learning cycles and enables quick adjustments to improve outcomes.
DevOps Use: Continuous feedback enables data-driven decisions, faster problem detection, improved user satisfaction, and continuous team and system improvement.
Q40
DevOps Fundamentals
What is continuous improvement in DevOps, and give an example of how it's implemented?
Answer
Explanation: Continuous improvement is the ongoing effort to enhance processes, tools, and practices based on feedback and metrics. Example: A team notices deployment failures are increasing, so they implement automated testing, improve monitoring, conduct retrospectives, and gradually reduce failure rates. It's about making small, incremental improvements consistently.
DevOps Use: Continuous improvement drives efficiency gains, quality improvements, reduced costs, and better team satisfaction through systematic enhancement of practices.
Q41
DevOps Fundamentals
How does DevOps reduce failure rates and recovery time for software deployments?
Answer
Explanation: DevOps reduces failures through: automated testing catching bugs early, smaller, frequent deployments reducing risk, infrastructure as code ensuring consistency, monitoring providing early detection, and automated rollback capabilities. Recovery time improves through automation, better monitoring, documented procedures, and practiced incident response.
DevOps Use: Lower failure rates and faster recovery improve system reliability, user experience, and business continuity while reducing operational stress.
Q42
DevOps Fundamentals
What is a 'single source of truth,' and why is it important in DevOps workflows?
Answer
Explanation: Single source of truth means having one authoritative location for each piece of information - code in version control, configuration in repositories, documentation in wikis, and metrics in monitoring systems. This prevents conflicts, ensures consistency, and enables automation by providing reliable, up-to-date information that everyone can trust.
DevOps Use: Single source of truth enables reliable automation, reduces configuration drift, improves collaboration, and ensures consistent deployments across environments.
Q43
DevOps Fundamentals
How do DevOps practices improve business outcomes and customer satisfaction?
Answer
Explanation: DevOps improves business outcomes through: faster feature delivery increasing competitive advantage, higher quality reducing customer issues, better reliability improving user experience, reduced costs through automation, and faster problem resolution maintaining customer trust. These technical improvements directly translate to business value.
DevOps Use: DevOps practices enable rapid response to market changes, improved customer experience, reduced operational costs, and increased revenue through faster innovation.
Q44
DevOps Fundamentals
What are common misconceptions about DevOps?
Answer
Explanation: Common misconceptions include: DevOps is just tools (it's primarily culture), DevOps eliminates operations roles (it transforms them), DevOps means no processes (it requires disciplined processes), DevOps is only for startups (enterprises benefit greatly), and DevOps means developers do operations (it means collaboration, not role elimination).
DevOps Use: Understanding misconceptions helps organizations avoid common pitfalls and focus on cultural transformation alongside technical improvements.
Q45
DevOps Fundamentals
Explain the difference between 'infrastructure as code' and traditional manual infrastructure management.
Answer
Explanation: Infrastructure as Code (IaC) defines infrastructure using code files that can be version controlled, tested, and automated. Traditional manual management involves clicking through consoles, running commands manually, and maintaining documentation separately. IaC provides consistency, repeatability, version control, and automation, while manual management is error-prone, inconsistent, and difficult to scale.
DevOps Use: IaC enables rapid environment provisioning, consistent configurations, disaster recovery, and infrastructure changes through standard development workflows.
Q46
DevOps Fundamentals
What is the difference between deployment strategies: blue-green vs. canary vs. rolling?
Answer
Explanation: Blue-Green uses two identical environments, switching traffic instantly between them. Canary deploys to a small subset of users first, gradually increasing traffic if successful. Rolling deployment gradually replaces instances one by one. Blue-Green offers instant rollback, Canary provides risk mitigation, and Rolling minimizes resource usage at the cost of slower rollback.
DevOps Use: Choose strategy based on risk tolerance, resource availability, rollback requirements, and application architecture to balance speed, safety, and cost.
Q47
DevOps Fundamentals
How would you explain the concept of 'automation' to a non-technical stakeholder?
Answer
Explanation: Automation means having computers do repetitive tasks that humans normally do manually, like testing software, deploying applications, or monitoring systems. It's like setting up a factory assembly line - once configured, it runs consistently without human intervention, reducing errors, saving time, and freeing people to focus on more valuable creative work.
DevOps Use: Automation reduces manual errors, increases consistency, enables faster delivery, and allows teams to focus on innovation rather than repetitive tasks.
Q48
DevOps Fundamentals
What is the risk of not having version-controlled infrastructure, and how does DevOps mitigate it?
Answer
Explanation: Risks include: configuration drift between environments, inability to recreate infrastructure, no audit trail of changes, difficulty rolling back problematic changes, and knowledge silos when team members leave. DevOps mitigates this through Infrastructure as Code, version control systems, automated deployments, and documented change processes.
DevOps Use: Version-controlled infrastructure enables consistent environments, audit compliance, disaster recovery, and collaborative infrastructure management.
Q49
DevOps Fundamentals
How would you handle a situation where developers and operations disagree on a deployment schedule?
Answer
Explanation: Address through: understanding each team's concerns (developers want features released, operations want stability), finding common ground through shared metrics, implementing risk mitigation strategies like feature flags or gradual rollouts, establishing clear criteria for deployment readiness, and creating collaborative decision-making processes that consider both perspectives.
DevOps Use: Collaborative conflict resolution strengthens team relationships, improves decision quality, and creates sustainable deployment practices that balance speed and stability.
Q50
DevOps Fundamentals
Why is monitoring and logging important even if automated pipelines exist?
Answer
Explanation: Automated pipelines can deploy code successfully but can't guarantee runtime behavior, user experience, or business impact. Monitoring and logging provide visibility into: application performance, user behavior, system health, security events, and business metrics. They enable proactive issue detection, performance optimization, and understanding of real-world system behavior.
DevOps Use: Monitoring and logging enable observability, incident response, performance optimization, and continuous improvement based on production data and user feedback.
Q51
DevOps Fundamentals
How does DevOps bridge the gap between software development and operations?
Answer
Explanation: DevOps bridges the gap through: shared tools and platforms, cross-functional teams, joint responsibility for production, common metrics and goals, collaborative practices like pair programming and shared on-call duties, and cultural changes that emphasize cooperation over competition. It creates shared ownership of the entire application lifecycle.
DevOps Use: Bridging the gap reduces handoff delays, improves communication, increases system reliability, and creates more efficient software delivery processes.
Q52
DevOps Fundamentals
How does a DevOps culture encourage learning from failures?
Answer
Explanation: DevOps culture encourages learning through: blameless post-mortems focusing on system improvements, celebrating failures as learning opportunities, sharing knowledge across teams, implementing changes to prevent recurrence, and creating psychological safety for experimentation. Failures become valuable data for improvement rather than sources of punishment.
DevOps Use: Learning from failures improves system reliability, team knowledge, innovation capacity, and creates more resilient systems and processes.
Q53
DevOps Fundamentals
Explain a scenario where DevOps principles could prevent downtime.
Answer
Explanation: Scenario: A critical database update needs deployment. DevOps principles prevent downtime through: automated testing catching issues early, blue-green deployment enabling instant rollback, monitoring detecting problems immediately, infrastructure as code ensuring consistent environments, and practiced incident response procedures. Without DevOps, manual processes increase risk of errors and longer recovery times.
DevOps Use: DevOps practices minimize downtime through automation, testing, monitoring, and rapid recovery capabilities, maintaining business continuity and user experience.
Q54
DevOps Fundamentals
What is the difference between DevOps and Site Reliability Engineering (SRE)?
Answer
Explanation: DevOps is a cultural movement emphasizing collaboration between development and operations teams. SRE is Google's implementation of DevOps principles, focusing on applying software engineering practices to operations problems. SRE emphasizes reliability through error budgets, SLOs, and treating operations as a software problem. DevOps is the broader cultural change; SRE is a specific implementation methodology.
DevOps Use: Both aim to improve software delivery and reliability, but SRE provides specific practices and metrics for achieving DevOps goals in large-scale systems.
Q55
DevOps Fundamentals
How do you prioritize tasks in a DevOps team with multiple releases and urgent fixes?
Answer
Explanation: Prioritize based on: business impact and customer effect, security and compliance requirements, system stability and reliability, dependencies and blockers, and resource availability. Use frameworks like MoSCoW (Must, Should, Could, Won't) or impact/effort matrices. Maintain clear communication with stakeholders and regularly reassess priorities as situations change.
DevOps Use: Effective prioritization ensures critical issues are addressed first, resources are used efficiently, and business objectives are met while maintaining system stability.
Q56
DevOps Fundamentals
What is DevSecOps, and how does integrating security early save costs?
Answer
Explanation: DevSecOps integrates security practices throughout the development lifecycle rather than as a final gate. Early integration saves costs because: security issues are cheaper to fix during development than production, automated security testing prevents vulnerabilities from reaching production, security as code enables consistent security policies, and early detection reduces compliance and breach risks.
DevOps Use: DevSecOps enables secure-by-default applications, automated compliance checking, and reduced security incidents without slowing development velocity.
Q57
DevOps Fundamentals
Why is it risky to make manual changes in production outside DevOps processes?
Answer
Explanation: Manual production changes are risky because: no audit trail of what changed, configuration drift from other environments, potential for human error, difficulty reproducing changes, no testing or validation, and inability to rollback easily. These changes bypass safety mechanisms like testing, code review, and automated deployment processes.
DevOps Use: DevOps processes provide safety nets through automation, testing, version control, and rollback capabilities that manual changes bypass.
Q58
DevOps Fundamentals
How would you convince a team to adopt DevOps practices if they are resistant?
Answer
Explanation: Convince through: demonstrating quick wins with small improvements, showing concrete benefits like reduced deployment time or fewer bugs, addressing specific pain points they experience, providing training and support, starting with willing team members, and measuring and sharing success metrics. Focus on solving their problems rather than imposing practices.
DevOps Use: Successful DevOps adoption requires buy-in from teams, which comes through demonstrating value and addressing concerns rather than mandating changes.
Q59
DevOps Fundamentals
What are the top three best practices for a junior DevOps engineer starting in a team?
Answer
Explanation: Top three practices: 1) Learn the existing systems and processes before suggesting changes - understand current state and reasons behind decisions. 2) Automate small, repetitive tasks first to build confidence and demonstrate value. 3) Focus on collaboration and communication - DevOps is as much about people as technology, so build relationships and ask questions.
DevOps Use: These practices help junior engineers contribute effectively while learning, building trust with team members, and developing both technical and soft skills.
Q60
DevOps Fundamentals
Explain a scenario where automation could fail and how you would mitigate it.
Answer
Explanation: Scenario: Automated deployment pipeline deploys faulty code due to test failure. Mitigation strategies: implement multiple testing layers (unit, integration, smoke tests), use gradual rollout strategies like canary deployments, implement automated rollback triggers, maintain comprehensive monitoring and alerting, and have manual override capabilities for emergencies.
DevOps Use: Automation failure mitigation ensures system reliability while maintaining the benefits of automation through layered safety mechanisms and rapid recovery procedures.
Q61
Docker
What is Docker, and how is it different from a virtual machine?
Answer
Explanation: Docker is a containerization platform that packages applications with their dependencies into lightweight, portable containers. Unlike VMs that virtualize entire operating systems with hypervisors, Docker containers share the host OS kernel while maintaining process isolation. VMs require separate OS instances (heavy resource usage), while containers use OS-level virtualization (minimal overhead). Docker provides faster startup times, better resource efficiency, and easier deployment compared to VMs.
DevOps Use: Docker enables consistent deployments across environments, microservices architecture, CI/CD automation, and efficient resource utilization in cloud and on-premises infrastructure.
Q62
Docker
What's the difference between a Docker image and a container?
Answer
Explanation: A Docker image is a read-only template containing application code, runtime, libraries, and dependencies - essentially a blueprint for creating containers. A container is a running instance of an image with its own writable layer, process space, and network interface. Images are immutable and can be shared/stored in registries. Containers are ephemeral, stateful, and can be started, stopped, or deleted. One image can spawn multiple containers.
DevOps Use: Images enable consistent deployments and version control. Containers provide isolated runtime environments for applications, enabling horizontal scaling and microservices architecture.
Q63
Docker
What is a Dockerfile, and what do common instructions (FROM, RUN, COPY, CMD, ENTRYPOINT, ARG, ENV) do?
Answer
Explanation: A Dockerfile is a text file containing instructions to build Docker images. Key instructions: FROM (base image), RUN (execute commands during build), COPY (copy files from build context), CMD (default command when container starts), ENTRYPOINT (fixed command that always runs), ARG (build-time variables), ENV (environment variables). Each instruction creates a new layer. Order matters for caching - put frequently changing instructions last.
DevOps Use: Dockerfiles enable Infrastructure as Code for container images, version control of build processes, automated image creation in CI/CD pipelines, and reproducible builds.
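A small Dockerfile exercising these instructions, assuming a hypothetical Node.js app:

  FROM node:20-alpine              # base image
  ARG APP_ENV=production           # build-time variable
  ENV NODE_ENV=$APP_ENV            # runtime environment variable
  WORKDIR /app
  COPY package*.json ./            # copy manifests from the build context
  RUN npm ci                       # runs at build time, creates a layer
  COPY . .
  ENTRYPOINT ["node"]              # fixed command
  CMD ["server.js"]                # default argument, overridable at run time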
Q64
Docker
CMD vs ENTRYPOINT: how do they differ, and how do you override them at runtime?
Answer
Explanation: CMD provides default arguments that can be completely overridden by docker run arguments. ENTRYPOINT sets a fixed command that always executes, with docker run arguments appended as parameters. CMD is replaceable, ENTRYPOINT is not. Best practice: use ENTRYPOINT for the main command and CMD for default arguments. Override CMD with 'docker run image new-command', override ENTRYPOINT with '--entrypoint' flag.
DevOps Use: ENTRYPOINT ensures consistent application startup in different environments. CMD provides flexibility for different execution modes (dev vs prod configurations).
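Assuming an image built with ENTRYPOINT ["node"] and CMD ["server.js"] (image and file names hypothetical), the runtime behavior looks like this:

  docker run myapp                      # runs: node server.js
  docker run myapp worker.js            # CMD replaced: node worker.js
  docker run --entrypoint sh myapp      # ENTRYPOINT overridden entirely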
Q65
Docker
COPY vs ADD: when should you use each, and why?
Answer
Explanation: COPY simply copies files/directories from build context to image. ADD has additional features: automatic tar extraction, URL downloads, and decompression. COPY is preferred for transparency and predictability. Use ADD only when you need its special features (extracting tars, downloading from URLs). COPY is more explicit about what's happening, making Dockerfiles easier to understand and debug.
DevOps Use: COPY for standard file operations in CI/CD builds. ADD for downloading dependencies or extracting archives during image creation, though external downloads can make builds non-reproducible.
Q66
Docker
What is the build context and .dockerignore, and why do they matter for speed and security?
Answer
Explanation: Build context is the directory sent to Docker daemon during build, containing all files available to COPY/ADD instructions. Large contexts slow builds and increase security risks. .dockerignore excludes files from build context (like .gitignore), reducing context size and preventing sensitive files from being included. Common exclusions: .git/, node_modules/, *.log, secrets, and temporary files.
DevOps Use: Optimized build contexts speed up CI/CD pipelines, reduce network transfer, and prevent accidental inclusion of secrets or unnecessary files in images.
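A typical .dockerignore for a Node.js project (the entries are illustrative):

  .git
  node_modules
  dist
  *.log
  .env          # keep local secrets out of the build context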
Q67
Docker
How does the Docker layer cache work, and how can you structure a Dockerfile to maximize cache hits?
Answer
Explanation: Docker caches each instruction as a separate layer. If an instruction and its context haven't changed, Docker reuses the cached layer. Cache invalidation occurs when an instruction or its inputs change, invalidating all subsequent layers. Optimization strategies: order instructions by change frequency (dependencies first, code last), use specific COPY commands, leverage multi-stage builds, and use .dockerignore to exclude cache-busting files.
DevOps Use: Effective caching dramatically reduces build times in CI/CD pipelines, saves bandwidth, and improves developer productivity through faster local builds.
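A sketch of cache-friendly ordering, assuming an npm project: the rarely-changing dependency layers come first, the frequently-changing source copy last:

  FROM node:20-alpine
  WORKDIR /app
  COPY package.json package-lock.json ./   # changes rarely: stays cached
  RUN npm ci                               # re-runs only when the manifests change
  COPY . .                                 # changes often: keep it last
  CMD ["node", "server.js"]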
Q68
Docker
What are multi-stage builds, and how do they reduce image size?
Answer
Explanation: Multi-stage builds use multiple FROM statements in a single Dockerfile, allowing you to copy artifacts between stages while discarding unnecessary build tools and dependencies. Common pattern: build stage (includes compilers, build tools) and runtime stage (only runtime dependencies and compiled artifacts). This dramatically reduces final image size by excluding build-time dependencies from production images.
DevOps Use: Smaller images reduce deployment time, storage costs, attack surface, and network transfer. Essential for production deployments and container registries.
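A minimal two-stage sketch for a Go service (the paths and base images are one common choice, not the only one):

  FROM golang:1.22 AS build              # build stage with the full toolchain
  WORKDIR /src
  COPY . .
  RUN CGO_ENABLED=0 go build -o /app .   # static binary for a minimal runtime

  FROM gcr.io/distroless/static          # runtime stage: no shell, no package manager
  COPY --from=build /app /app            # only the compiled binary is carried over
  ENTRYPOINT ["/app"]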
Q69
Docker
How do you build and tag an image (including multi-arch with buildx)?
Answer
Explanation: Basic build: 'docker build -t myapp:v1.0 .' Tags identify images with name:version format. Multi-architecture builds use Docker Buildx: 'docker buildx build --platform linux/amd64,linux/arm64 -t myapp:v1.0 --push .' This creates images for multiple CPU architectures (x86, ARM) in a single command. Use semantic versioning for tags and 'latest' tag for current stable version.
DevOps Use: Multi-arch builds support diverse deployment targets (x86 servers, ARM cloud instances, Apple Silicon). Proper tagging enables version management and rollback strategies.
Q70
Docker
How do you run a container with port mappings (-p) and what's the difference between EXPOSE and publishing ports?
Answer
Explanation: Port mapping with -p flag: 'docker run -p 8080:80 nginx' maps host port 8080 to container port 80. EXPOSE instruction documents which ports the container listens on but doesn't publish them. Publishing (-p) actually makes ports accessible from host. EXPOSE is documentation/metadata, -p creates actual network routing. Use -P to publish all exposed ports to random host ports.
DevOps Use: Port mapping enables external access to containerized services, load balancer configuration, and service discovery in orchestration platforms.
Q71
Docker
How do you list, stop, and remove containers and images (docker ps, stop, rm, image rm)?
Answer
Explanation: Container management: 'docker ps' (running containers), 'docker ps -a' (all containers), 'docker stop <container>' (graceful stop), 'docker kill <container>' (force stop), 'docker rm <container>' (remove container). Image management: 'docker images' (list images), 'docker rmi <image>' (remove image), 'docker image prune' (remove unused images). Use container names or IDs for operations.
DevOps Use: Essential for container lifecycle management, cleanup automation, resource management, and troubleshooting in development and production environments.
Q72
Docker
What's the difference between volumes, bind mounts, and tmpfs, and when do you use each?
Answer
Explanation: Volumes are Docker-managed storage, persisted in Docker's directory, best for data persistence. Bind mounts map host directories to containers, useful for development and configuration files. tmpfs mounts store data in host memory, perfect for temporary/sensitive data. Volumes offer better portability and backup options. Bind mounts provide direct host access. tmpfs ensures data never touches disk.
DevOps Use: Volumes for database storage and persistent data. Bind mounts for configuration files and development workflows. tmpfs for temporary files and security-sensitive data.
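Illustrative commands for all three mount types (the image and path names are hypothetical):

  docker volume create appdata
  docker run -v appdata:/var/lib/postgresql/data postgres:16   # named volume: persistent
  docker run -v "$(pwd)/config:/etc/app:ro" myapp              # bind mount: host directory, read-only
  docker run --tmpfs /scratch myapp                            # tmpfs: memory-only, gone at exit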
Q73
Docker
Explain Docker network drivers (bridge, host, none). When do containers share networks and DNS?
Answer
Explanation: Bridge (default) creates isolated network with internal DNS, containers communicate via container names. Host removes network isolation, container uses host's network directly. None disables networking completely. Custom bridge networks provide better isolation and DNS resolution. Containers on same network can communicate using container names as hostnames. Docker provides built-in DNS server for service discovery.
DevOps Use: Bridge networks for microservices communication, host networking for performance-critical applications, custom networks for service isolation and orchestration.
Q74
Docker
How do you publish and troubleshoot ports (host vs container port, conflicts)?
Answer
Explanation: Port publishing maps host ports to container ports: 'docker run -p 8080:80' maps host 8080 to container 80. Common issues: port conflicts (host port already in use), firewall blocking, wrong port mapping, container not listening on expected port. Troubleshooting: check with 'netstat', 'docker port <container>', test with 'curl', verify application logs, and ensure the container process binds to 0.0.0.0, not localhost.
DevOps Use: Port management is crucial for service accessibility, load balancer configuration, and avoiding conflicts in multi-service deployments.
Q75
Docker
What is Docker Compose? How do you define multi-service apps and use up, down, --build, and scaling?
Answer
Explanation: Docker Compose orchestrates multi-container applications using YAML configuration files. Define services, networks, and volumes in docker-compose.yml. Commands: 'docker-compose up' (start services), 'docker-compose down' (stop and remove), 'docker-compose up --build' (rebuild images), 'docker-compose up --scale web=3' (scale services). Compose handles service dependencies, networking, and volume management automatically.
DevOps Use: Compose simplifies local development environments, testing setups, and small-scale deployments. Essential for microservices development and integration testing.
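A minimal docker-compose.yml sketch with two services; the service names, images, and volume are illustrative:

```yaml
services:
  web:
    build: .
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

With the Compose v2 plugin the same commands are spelled with a space: `docker compose up -d --build`, `docker compose up -d --scale web=3`, `docker compose down`.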
Q76
Docker
Docker
How do you wire dependencies in Compose (depends_on, healthchecks, env_file, profiles)?
dockercomposedependencies
Answer
Answer
Explanation: depends_on controls startup order but doesn't wait for readiness. healthcheck defines container health status. Use depends_on with condition: service_healthy for true dependency management. env_file loads environment variables from files. profiles group services for different environments (dev, test, prod). Combine these for robust service orchestration with proper startup sequencing and environment management.
DevOps Use: Proper dependency management ensures services start in correct order, health checks enable reliable deployments, and profiles support multiple environments with single compose file.
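A sketch combining these features, assuming a Postgres service and a hypothetical `api` service built from the local directory:

```yaml
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  api:
    build: .
    env_file: .env
    depends_on:
      db:
        condition: service_healthy   # wait for healthy, not just started
    profiles: ["dev"]                # only started with --profile dev
```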
Q77
Docker
Docker
How do you integrate Docker in CI/CD (build, cache, push, scan) and avoid cache busting in pipelines?
dockercicdautomation
Answer
Answer
Explanation: CI/CD Docker integration: build images in pipeline, use registry caching (docker buildx with cache-from/cache-to), push to registries, scan for vulnerabilities. Avoid cache busting by: using specific COPY commands, leveraging .dockerignore, implementing proper layer ordering, and using external cache storage. Use multi-stage builds and BuildKit for advanced caching strategies.
DevOps Use: Automated image building, security scanning, registry management, and deployment automation. Proper caching reduces build times and infrastructure costs.
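A hedged sketch of registry-backed caching with buildx; the GHCR path, cache tag, and `GIT_SHA` variable are placeholders, and `mode=max` needs a registry that accepts cache manifests:

```bash
docker buildx build \
  --cache-from type=registry,ref=ghcr.io/acme/app:buildcache \
  --cache-to type=registry,ref=ghcr.io/acme/app:buildcache,mode=max \
  -t "ghcr.io/acme/app:${GIT_SHA}" \
  --push .
```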
Q78
Docker
Docker
What's the workflow for registries: docker login, push/pull, and private repos (Hub/GHCR/ECR/ACR)?
dockerregistrydistribution
Answer
Answer
Explanation: Registry workflow: 1) docker login (authenticate), 2) docker tag (prepare image), 3) docker push (upload), 4) docker pull (download). Private registries require authentication and proper naming conventions. Docker Hub uses docker.io, GHCR uses ghcr.io, ECR uses AWS account URLs, ACR uses Azure URLs. Each has specific authentication methods (tokens, IAM roles, service principals).
DevOps Use: Centralized image storage, version management, access control, and distribution across environments. Essential for production deployments and team collaboration.
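A typical push/pull round-trip against GHCR, with placeholder org, image, and token names:

```bash
echo "$GH_TOKEN" | docker login ghcr.io -u USERNAME --password-stdin
docker tag myapp:1.2.0 ghcr.io/acme/myapp:1.2.0
docker push ghcr.io/acme/myapp:1.2.0
docker pull ghcr.io/acme/myapp:1.2.0
```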
Q79
Docker
Docker
Tags vs digests: why pin images by digest in production?
dockersecurityversioning
Answer
Answer
Explanation: Tags are mutable labels (latest, v1.0) that can point to different images over time. Digests are immutable SHA256 hashes that uniquely identify specific image content. In production, pin by digest (image@sha256:abc123) to ensure exact same image is deployed, preventing supply chain attacks and unexpected changes. Tags are convenient for development, digests provide security and reproducibility.
DevOps Use: Digest pinning ensures deployment consistency, prevents supply chain attacks, enables compliance auditing, and provides exact rollback capabilities.
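To find and use a digest (the hash below is a placeholder, not a real digest):

```bash
docker buildx imagetools inspect nginx:1.25   # prints the digest the tag currently resolves to
docker pull nginx@sha256:0123abc...           # placeholder digest; substitute the real value
```

The same `image@sha256:...` form works in Dockerfile FROM lines and Kubernetes manifests.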
Q80
Docker
Docker
What are container restart policies (always, on-failure, unless-stopped) and when do you use them?
dockerreliabilitypolicies
Answer
Answer
Explanation: Restart policies control container behavior after exit: 'always' restarts regardless of exit code, 'on-failure' restarts only on non-zero exit, 'unless-stopped' restarts unless manually stopped, 'no' never restarts. Use 'always' for critical services, 'on-failure' for applications that might exit cleanly, 'unless-stopped' for services that should survive reboots but respect manual stops.
DevOps Use: Automatic service recovery, high availability, graceful handling of application failures, and system maintenance scenarios.
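A brief usage sketch (`web` is a placeholder container name):

```bash
docker run -d --name web --restart unless-stopped nginx
docker update --restart on-failure:5 web   # change policy in place; retry at most 5 times
```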
Q81
Docker
Docker
How do you view and manage logs (docker logs) and choose logging drivers (json-file, local, etc.)?
dockerloggingobservability
Answer
Answer
Explanation: View logs with 'docker logs <container>' (supports -f for follow, --tail for recent entries). Logging drivers control where logs go: json-file (default, local files), local (optimized local storage), syslog (system logger), journald (systemd), fluentd (log aggregation). Configure globally in daemon.json or per-container with --log-driver. Consider log rotation, retention, and centralized logging for production.
DevOps Use: Centralized logging, log aggregation, troubleshooting, monitoring, and compliance requirements. Essential for production observability.
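A short sketch; `web` is a placeholder container name:

```bash
docker logs -f --tail 100 web                        # follow the last 100 lines
docker run -d --log-driver json-file \
  --log-opt max-size=10m --log-opt max-file=3 nginx  # rotate at 10 MB, keep 3 files
```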
Q82
Docker
Docker
How do you set resource limits (--memory, --cpus) and what happens on OOM?
dockerresourceslimits
Answer
Answer
Explanation: Set resource limits with --memory (RAM limit), --cpus (CPU limit), --memory-swap (swap control). When container exceeds memory limit, Linux OOM killer terminates processes, usually the main application process. Container exits with code 137 (SIGKILL). Use --oom-kill-disable to prevent OOM killing (risky). Monitor resource usage with 'docker stats'. Set appropriate limits based on application requirements and available resources.
DevOps Use: Resource management, preventing resource starvation, capacity planning, and ensuring fair resource distribution in multi-tenant environments.
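A minimal sketch of limits and OOM diagnosis (`web` is a placeholder name):

```bash
docker run -d --name web --memory 512m --memory-swap 512m --cpus 1.5 nginx
docker stats --no-stream                 # one-off snapshot of CPU/memory usage
docker inspect web \
  --format '{{.State.OOMKilled}} {{.State.ExitCode}}'   # shows true / 137 after an OOM kill
```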
Q83
Docker
Docker
What is the HEALTHCHECK instruction and how do platforms/orchestrators use it?
dockerhealthmonitoring
Answer
Answer
Explanation: HEALTHCHECK instruction defines how to test container health: 'HEALTHCHECK CMD curl -f http://localhost:8080/health'. Returns 0 (healthy), 1 (unhealthy), or 2 (reserved). Docker tracks health status over time. Orchestrators (Kubernetes, Docker Swarm, ECS) use health checks for: rolling deployments, load balancer registration, automatic restarts, and traffic routing decisions.
DevOps Use: Automated health monitoring, zero-downtime deployments, service mesh integration, and reliable service discovery in orchestrated environments.
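A Dockerfile sketch, assuming curl exists in the image and the app serves a /health endpoint (hypothetical):

```dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
```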
Q84
Docker
Docker
How do containers shut down gracefully (SIGTERM/SIGKILL, STOPSIGNAL, PID 1, --init/tini)?
dockershutdownsignals
Answer
Answer
Explanation: Docker sends SIGTERM to PID 1, waits for grace period (default 10s), then sends SIGKILL. Applications must handle SIGTERM for graceful shutdown. PID 1 issues: doesn't forward signals, doesn't reap zombie processes. Solutions: use --init flag, tini init system, or proper signal handling in application. STOPSIGNAL instruction changes default signal. Ensure main process runs as PID 1 and handles signals correctly.
DevOps Use: Graceful shutdowns prevent data loss, ensure proper cleanup, maintain service availability during deployments, and support zero-downtime updates.
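Two common fixes in practice (`myapp` and `web` are placeholders); a Dockerfile can also override the default signal with `STOPSIGNAL SIGINT` if the app expects it:

```bash
docker run -d --init myapp   # tini becomes PID 1: forwards signals, reaps zombies
docker stop -t 30 web        # extend the grace period from 10s to 30s before SIGKILL
```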
Q85
Docker
Docker
Why should containers avoid running as root? How do USER, user-namespace remap, and rootless mode improve safety?
dockersecurityprivileges
Answer
Answer
Explanation: Running as root increases security risks: container escape affects host, privilege escalation attacks, and broader attack surface. USER instruction creates non-root user in container. User namespace remapping maps container root to unprivileged host user. Rootless mode runs Docker daemon as non-root user. These techniques implement defense-in-depth, limiting blast radius of security breaches.
DevOps Use: Security hardening, compliance requirements, multi-tenant environments, and reducing attack surface in production deployments.
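A minimal non-root Dockerfile sketch; the user name and `app.py` entrypoint are illustrative:

```dockerfile
FROM python:3.12-slim
RUN useradd --create-home appuser
WORKDIR /home/appuser/app
COPY --chown=appuser:appuser . .
USER appuser                  # everything from here runs unprivileged
CMD ["python", "app.py"]
```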
Q86
Docker
Docker
How do you choose minimal/secure base images and shrink images (clean package caches, multi-stage, avoid extra layers)?
dockeroptimizationsecurity
Answer
Answer
Explanation: Choose minimal base images: Alpine Linux (small), distroless (no shell/package manager), scratch (empty). Optimization techniques: multi-stage builds, clean package caches in same RUN command, combine RUN instructions, use .dockerignore, remove unnecessary files. Security: use official images, scan for vulnerabilities, keep base images updated, avoid adding unnecessary packages.
DevOps Use: Smaller images reduce deployment time, storage costs, attack surface, and network transfer. Critical for production efficiency and security.
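A multi-stage sketch that ends on a distroless base; the Go module layout is hypothetical:

```dockerfile
# Build stage: full toolchain, discarded from the final image
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server   # placeholder package path

# Final stage: distroless, no shell or package manager
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```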
Q87
Docker
Docker
How do you handle secrets safely (BuildKit --mount=type=secret, SSH mounts) vs ARG/ENV?
dockersecuritysecrets
Answer
Answer
Explanation: Never use ARG/ENV for secrets - they're visible in image layers and docker inspect. Safe methods: BuildKit secret mounts (--mount=type=secret), SSH mounts for private repos, external secret management (Vault, cloud services), runtime secret injection. Secret mounts don't persist in image layers. Use multi-stage builds to separate secret access from final image.
DevOps Use: Secure secret handling prevents credential exposure, enables compliance, supports secret rotation, and maintains security in CI/CD pipelines.
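A BuildKit secret-mount sketch; the secret id and token file are placeholders:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20
WORKDIR /app
COPY package*.json ./
# The secret is mounted only for this RUN step and never stored in a layer
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

Built with `docker build --secret id=npm_token,src=./npm_token.txt .` (the source file is a placeholder).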
Q88
Docker
Docker
How do you scan images and generate an SBOM (e.g., Docker Scout), and act on vulnerabilities?
dockersecurityscanning
Answer
Answer
Explanation: Image scanning detects vulnerabilities in base images and dependencies. Tools: Docker Scout, Trivy, Snyk, cloud-native scanners. SBOM (Software Bill of Materials) catalogs all components and dependencies. Workflow: scan during build, fail on high-severity issues, generate SBOM for compliance, track vulnerabilities over time. Act on findings: update base images, patch dependencies, implement compensating controls.
DevOps Use: Shift-left security, compliance requirements, supply chain security, and vulnerability management in production environments.
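A sketch of typical scan commands; `myapp:1.2.0` is a placeholder image and exact subcommands/flags vary by tool version:

```bash
docker scout cves myapp:1.2.0      # CVE report for an image
docker scout sbom myapp:1.2.0      # emit an SBOM
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:1.2.0   # non-zero exit fails the CI job
```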
Q89
Docker
Docker
What are best practices for caching in CI (buildx cache import/export, ordering COPY/RUN)?
dockercicaching
Answer
Answer
Explanation: CI caching best practices: use registry cache with buildx (--cache-from/--cache-to), order Dockerfile instructions by change frequency, use specific COPY paths, implement proper .dockerignore, leverage multi-stage builds. Cache strategies: inline cache (stored with image), registry cache (separate cache images), local cache (CI runner storage). Optimize layer ordering: dependencies first, source code last.
DevOps Use: Faster CI/CD pipelines, reduced infrastructure costs, improved developer productivity, and efficient resource utilization.
Q90
Docker
Docker
Compose vs Kubernetes: when is Compose enough, and what are its limits compared to k8s?
dockerorchestrationcomparison
Answer
Answer
Explanation: Compose is suitable for: development environments, simple deployments, single-host applications, and testing. Kubernetes provides: multi-host orchestration, auto-scaling, service discovery, rolling updates, self-healing, and enterprise features. Compose limits: single host, basic networking, limited scaling, no built-in load balancing. Choose Compose for simplicity, Kubernetes for production scale and complexity.
DevOps Use: Compose for local development and simple deployments. Kubernetes for production, microservices, multi-environment deployments, and enterprise requirements.
Q91
Git / GitHub
Git / GitHub
What is Git used for?
gitbasicsversion-control
Answer
Answer
Explanation: Git is a distributed version control system that tracks changes to code, enabling multiple developers to collaborate efficiently. It manages file versions, supports branching for parallel development, and maintains a history of changes for auditing and rollbacks. In DevOps, Git is critical for managing application code, infrastructure as code (e.g., Terraform, Ansible), and integrating with CI/CD pipelines like Jenkins or GitHub Actions to automate builds, tests, and deployments.
DevOps Use: Git triggers CI/CD workflows, versions configuration scripts, and enables rollbacks for faulty deployments, ensuring reliable software delivery.
Q92
Git / GitHub
Git / GitHub
Explain the structure of a Git repository and what the .git folder contains
gitrepositorystructure
Answer
Answer
Explanation: A Git repository consists of your working directory (project files) and the .git folder containing all version control data. The .git folder includes: objects/ (stores commits, trees, blobs), refs/ (branch and tag pointers), HEAD (current branch pointer), config (repository settings), hooks/ (custom scripts), and index (staging area). This structure enables Git's distributed nature, allowing complete history storage locally.
DevOps Use: Understanding .git structure helps troubleshoot repository issues, configure hooks for automated testing, and optimize repository performance in CI/CD environments.
Q93
Git / GitHub
Git / GitHub
What are the three states of files in Git?
gitworkflowstates
Answer
Answer
Explanation: Git has three main states: Modified (changes made but not staged), Staged (changes marked for next commit via git add), and Committed (changes safely stored in .git database). This three-stage workflow provides control over what gets committed. Files move through: Working Directory → Staging Area → Git Repository. The staging area (index) allows selective commits and reviewing changes before finalizing.
DevOps Use: Staging area enables atomic commits, separating feature changes from bug fixes, and creating clean commit history for better CI/CD pipeline tracking.
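The three states in practice (the file path and commit message are illustrative):

```bash
git status                   # shows modified vs staged files
git add src/app.py           # stage one change
git diff --staged            # review exactly what the next commit will contain
git commit -m "fix: handle empty input"
```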
Q94
Git / GitHub
Git / GitHub
What are different Git branching strategies and when to use them?
gitbranchingstrategy
Answer
Answer
Explanation: Common strategies include: Git Flow (feature/develop/release/hotfix branches for complex releases), GitHub Flow (simple feature branches to main), GitLab Flow (environment branches for deployment stages), and Trunk-based (short-lived branches, frequent integration). Each suits different team sizes and release cycles. Git Flow works for scheduled releases, GitHub Flow for continuous deployment, GitLab Flow for staged environments.
DevOps Use: Branching strategy affects CI/CD pipeline design, deployment automation, and release management. Choose based on team size, release frequency, and environment complexity.
Q95
Git / GitHub
Git / GitHub
Explain merge vs rebase in detail, including when and why to use each
gitmergerebasehistory
Answer
Answer
Explanation: Merge combines branches by creating a merge commit, preserving the complete history and branch structure. Rebase replays commits from one branch onto another, creating a linear history by rewriting commit hashes. Merge shows true development timeline with parallel work; rebase creates cleaner, linear history. Interactive rebase allows squashing, editing, and reordering commits. Never rebase shared/public branches as it rewrites history.
DevOps Use: Use merge for feature integration to preserve context, rebase for cleaning local history before pushing, and interactive rebase for preparing clean commits for code review.
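Side by side, with a hypothetical `feature/user-auth` branch:

```bash
# Merge: preserves both histories with a merge commit
git checkout main
git merge feature/user-auth

# Rebase: replay local commits onto the updated base (only on branches no one else has pulled)
git checkout feature/user-auth
git rebase main
git rebase -i HEAD~3   # interactively squash/reword the last three commits
```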
Q96
Git / GitHub
Git / GitHub
How do you handle merge conflicts and what causes them?
gitconflictsresolution
Answer
Answer
Explanation: Merge conflicts occur when Git cannot automatically merge changes affecting the same lines or nearby areas. Common causes: simultaneous edits to same lines, one branch modifies while another deletes, binary file conflicts. Resolution process: 1) Git marks conflicts with <<<<<<< HEAD, =======, >>>>>>> markers, 2) Edit files to resolve conflicts, 3) Remove conflict markers, 4) Stage resolved files with git add, 5) Complete merge with git commit. Use merge tools like vimdiff, meld, or IDE integration for complex conflicts.
DevOps Use: Automated conflict detection in CI/CD pipelines, establishing merge conflict resolution procedures, and using merge tools for infrastructure code conflicts.
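The resolution flow as commands; the branch and file names are placeholders:

```bash
git merge feature/user-auth   # stops if the same lines changed on both branches
git status                    # lists files marked "both modified"
# edit the files, remove the <<<<<<< ======= >>>>>>> markers, then:
git add app.conf
git commit                    # completes the merge
git merge --abort             # or give up and restore the pre-merge state
```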
Q97
Git / GitHub
Git / GitHub
Explain Git remote operations: fetch, pull, push, and their differences
gitremoteoperations
Answer
Answer
Explanation: Remote operations sync local and remote repositories. Fetch downloads commits, branches, and tags from remote without merging (git fetch origin). Pull combines fetch + merge in one command (git pull = git fetch + git merge). Push uploads local commits to remote repository (git push origin branch). Fetch is safer for reviewing changes first; pull is convenient for direct integration. Push requires write access and handles conflicts by rejecting if remote has newer commits.
DevOps Use: Fetch for checking CI/CD pipeline updates, pull for getting latest infrastructure changes, push for deploying code changes, and managing multiple remotes for different environments.
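A safe review-then-integrate sequence (branch names are illustrative):

```bash
git fetch origin                       # download new commits without touching your branch
git log HEAD..origin/main --oneline    # review what came in before merging
git pull origin main                   # fetch + merge in one step
git push origin feature/user-auth      # publish local commits (rejected if remote is ahead)
```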
Q98
Git / GitHub
Git / GitHub
What is Git stash and how do you use it effectively?
gitstashworkflow
Answer
Answer
Explanation: Git stash temporarily saves uncommitted changes (modified and staged files) allowing you to switch branches or pull updates without committing incomplete work. Commands: git stash (save changes), git stash pop (apply and remove latest stash), git stash apply (apply without removing), git stash list (show all stashes), git stash drop (delete stash). Stashes are stored in a stack (LIFO). Can stash specific files with git stash push -m "message" file.txt.
DevOps Use: Quickly switch between feature branches, apply hotfixes without losing current work, and temporarily store configuration changes during deployments.
Q99
Git / GitHub
Git / GitHub
What's the difference between git reset, git revert, and git checkout?
gitresetrevertcheckout
Answer
Answer
Explanation: Reset moves branch pointer and optionally modifies staging/working directory: --soft (move pointer only), --mixed (default, reset staging), --hard (reset everything, dangerous). Revert creates new commit that undoes previous commit, safe for shared history. Checkout switches branches or restores files from specific commits without changing branch pointers. Reset rewrites history (dangerous on shared branches), revert preserves history (safe for collaboration).
DevOps Use: Use revert for undoing deployed changes safely, reset for cleaning local commits before pushing, checkout for investigating specific versions during debugging.
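The three commands in action; the commit hash, tag, and file path are placeholders:

```bash
git reset --soft HEAD~1                 # undo last commit, keep changes staged
git reset --hard HEAD~1                 # drop last commit and its changes (destructive)
git revert abc1234                      # new commit undoing abc1234; safe on shared branches
git checkout v1.2.3 -- config/app.yml   # restore one file from a tag
```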
Q100
Git / GitHub
Git / GitHub
What is the difference between Git and GitHub?
gitgithubplatform
Answer
Answer
Explanation: Git is the distributed version control system - a command-line tool that tracks changes locally on your machine. GitHub is a cloud-based hosting platform that provides Git repository hosting plus collaboration features: pull requests, issues, wikis, project management, GitHub Actions for CI/CD, and social coding features. Git works offline and is the core technology; GitHub adds web interface, team collaboration, and DevOps automation on top of Git.
DevOps Use: Git handles local version control and branching; GitHub provides centralized repository hosting, automated CI/CD pipelines, team collaboration workflows, and integration with deployment tools.
Q101
Git / GitHub
Git / GitHub
How do you clone a repository and what does it include?
gitclonerepository
Answer
Answer
Explanation: Git clone creates a complete local copy of a remote repository using `git clone <url> [directory]`. It downloads all files, complete commit history, all branches (though only main/master is checked out), tags, and configuration. Sets up 'origin' as default remote pointing to source repository. Supports HTTPS (username/password or token) and SSH (key-based) protocols. Options include --depth for shallow clones, --branch for specific branch, --single-branch to limit branches.
DevOps Use: Clone infrastructure repositories for local development, download CI/CD configurations, get application code for containerization, and set up development environments.
Q102
Git / GitHub
Git / GitHub
What is a pull request and how does it fit into a CI/CD workflow?
githubpull-requestcicd
Answer
Answer
Explanation: A pull request (PR) is a GitHub feature that proposes merging changes from one branch into another, enabling code review, discussion, and automated testing before integration. PRs include diff visualization, inline comments, approval workflows, and status checks from CI/CD systems. They enforce quality gates by requiring reviews, passing tests, and meeting branch protection rules before merging.
DevOps Use: PRs trigger automated CI/CD pipelines for testing, security scanning, and deployment previews. They integrate with tools like Jenkins, GitHub Actions, and SonarQube for comprehensive quality checks.
Q103
Git / GitHub
Git / GitHub
How do you resolve merge conflicts in GitHub?
githubconflictsmerge
Answer
Answer
Explanation: GitHub merge conflicts occur when automatic merging fails due to competing changes. GitHub's web interface shows conflict markers and allows simple resolution for basic conflicts. For complex conflicts: 1) Pull both branches locally, 2) Merge locally and resolve conflicts manually, 3) Push resolved changes. GitHub also provides conflict resolution tools in PRs, showing side-by-side diffs and allowing direct editing. Use 'Resolve conflicts' button for web-based resolution.
DevOps Use: Establish conflict resolution procedures in team workflows, use automated conflict detection in CI/CD pipelines, and implement merge strategies that minimize conflicts.
Q104
Git / GitHub
Git / GitHub
What is the purpose of .gitignore and how do you configure it?
gitgitignoreconfiguration
Answer
Answer
Explanation: .gitignore specifies files and directories Git should ignore, preventing them from being tracked or committed. Common ignores: build artifacts (dist/, build/), dependencies (node_modules/, vendor/), IDE files (.vscode/, .idea/), OS files (.DS_Store, Thumbs.db), logs (*.log), and secrets (.env, *.key). Patterns use wildcards (*), directories (folder/), and negation (!important.log). Global .gitignore affects all repositories.
DevOps Use: Exclude build outputs, temporary files, secrets, and environment-specific configurations from version control, keeping repositories clean and secure.
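A small .gitignore sketch showing the pattern types mentioned above:

```gitignore
# Build output and dependencies
dist/
node_modules/
*.log

# Local environment and secrets
.env
*.key

# Exception: keep this file tracked despite *.log above
!important.log
```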
Q105
Git / GitHub
Git / GitHub
Explain the difference between git fetch, git pull, and git rebase.
gitfetchpullrebase
Answer
Answer
Explanation: Fetch downloads remote changes without merging (git fetch origin), updating remote-tracking branches but not your working branch. Pull combines fetch + merge (git pull = git fetch + git merge origin/branch), automatically merging remote changes into current branch. Rebase replays your commits on top of updated remote branch (git pull --rebase), creating linear history without merge commits. Each serves different workflow needs.
DevOps Use: Fetch for reviewing changes before integration, pull for quick updates in feature branches, pull --rebase for keeping a linear history on your local branch before pushing to shared remotes.
Q106
Git / GitHub
Git / GitHub
How do you manage access control and permissions in GitHub repositories?
githubaccess-controlsecurity
Answer
Answer
Explanation: GitHub provides multiple access control levels: Repository permissions (read, write, admin), organization roles (member, owner), team-based access, and branch protection rules. Personal access tokens (PATs) and SSH keys handle authentication. Enterprise features include SAML SSO, LDAP integration, and audit logs. Branch protection enforces required reviews, status checks, and restricts force pushes. Deploy keys provide read-only access for specific repositories.
DevOps Use: Implement least-privilege access, use service accounts for CI/CD, rotate tokens regularly, and audit access permissions for compliance requirements.
Q107
Git / GitHub
Git / GitHub
What are GitHub Actions and how do they support automation?
githubactionsautomation
Answer
Answer
Explanation: GitHub Actions is a CI/CD platform that automates workflows triggered by repository events (push, PR, release). Workflows are defined in YAML files (.github/workflows/) containing jobs that run on GitHub-hosted or self-hosted runners. Actions are reusable units of code that perform specific tasks. Supports matrix builds, conditional execution, artifacts, caching, and integration with external services. Marketplace provides thousands of pre-built actions.
DevOps Use: Automate testing, building, deployment, security scanning, dependency updates, and infrastructure provisioning across multiple environments and platforms.
Q108
Git / GitHub
Git / GitHub
How do you securely manage secrets in GitHub Actions workflows?
githubsecretssecurity
Answer
Answer
Explanation: GitHub provides encrypted secrets storage at repository, environment, and organization levels. Secrets are injected as environment variables during workflow execution and are masked in logs. Types include repository secrets (general use), environment secrets (deployment-specific), and organization secrets (shared across repos). Use GITHUB_TOKEN for repository operations, external secrets for third-party services. Never hardcode secrets in workflow files.
DevOps Use: Store API keys, deployment credentials, database passwords, and certificates securely for automated deployments and integrations.
Q109
Git / GitHub
Git / GitHub
What is the role of Git tags in release management?
gittagsreleases
Answer
Answer
Explanation: Git tags mark specific commits as significant points in history, typically for releases. Lightweight tags are simple pointers; annotated tags store metadata (tagger, date, message). Semantic versioning (v1.2.3) is common for release tags. Tags enable reproducible builds, rollbacks to specific versions, and changelog generation. GitHub Releases build on tags, adding release notes, binaries, and deployment automation.
DevOps Use: Automate deployments based on tag creation, generate changelogs from tag history, create release artifacts, and implement version-based rollback strategies.
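Both tag types in practice (version numbers are illustrative):

```bash
git tag v1.2.3                         # lightweight tag (pointer only)
git tag -a v1.3.0 -m "Release 1.3.0"   # annotated tag with tagger, date, message
git push origin v1.3.0                 # tags are not pushed by default
git describe --tags                    # nearest tag reachable from HEAD
```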
Q110
Git / GitHub
Git / GitHub
How do GitHub webhooks work and how are they used in automation pipelines?
githubwebhooksautomation
Answer
Answer
Explanation: GitHub webhooks are HTTP callbacks that notify external systems when repository events occur (push, PR, release, etc.). Webhooks send JSON payloads to configured URLs, enabling real-time integration with external tools. Common events: push (code changes), pull_request (PR actions), release (version tags), issues (bug reports). Webhooks can trigger Jenkins builds, Slack notifications, deployment pipelines, or custom automation scripts.
DevOps Use: Trigger CI/CD pipelines in external systems, send notifications to chat platforms, update project management tools, and synchronize with monitoring systems.
Q111
Git / GitHub
Git / GitHub
How do you enforce code quality using GitHub integrations like linters or SonarQube?
githubcode-qualityintegrations
Answer
Answer
Explanation: GitHub integrates with code quality tools through status checks, GitHub Apps, and Actions. Tools like SonarQube, CodeClimate, ESLint, and Prettier can automatically analyze code in PRs and block merging if quality gates fail. Integration methods: GitHub Actions workflows, third-party apps, webhooks, and API status checks. Branch protection rules enforce required status checks before merging.
DevOps Use: Automate code review processes, maintain coding standards, detect security vulnerabilities, and ensure consistent code formatting across teams.
Q112
Git / GitHub
Git / GitHub
Describe a strategy for versioning and changelog generation using GitHub workflows.
githubversioningchangelog
Answer
Answer
Explanation: Implement semantic versioning (MAJOR.MINOR.PATCH) with conventional commits for automated versioning. Use tools like semantic-release, standard-version, or custom GitHub Actions to analyze commit messages and determine version bumps. Generate changelogs from commit history, PR titles, or release notes. Automate tag creation, GitHub releases, and package publishing. Conventional commits format (feat:, fix:, BREAKING CHANGE:) enables automated categorization.
DevOps Use: Automate release processes, maintain consistent versioning across microservices, generate user-facing changelogs, and coordinate deployments with version tags.
Q113
Git / GitHub
Git / GitHub
How do you implement a CI/CD pipeline using GitHub Actions, Docker, and Terraform?
githubcicddockerterraform
Answer
Answer
Explanation: Create multi-stage pipeline: 1) Build stage - checkout code, run tests, build Docker images, 2) Security stage - scan images and code, 3) Deploy stage - use Terraform to provision infrastructure, deploy containers to orchestration platform. Use GitHub Actions workflows with job dependencies, environment-specific secrets, and approval gates for production. Store Terraform state remotely (S3, Terraform Cloud) and use workspaces for multiple environments.
DevOps Use: Automate application deployment, infrastructure provisioning, environment management, and rollback procedures with full traceability and approval workflows.
Q114
Git / GitHub
Git / GitHub
What are the best practices for writing modular and reusable GitHub Actions?
githubactionsbest-practices
Answer
Answer
Explanation: Best practices include: create composite actions for repeated logic, use inputs/outputs for parameterization, implement proper error handling, use semantic versioning for action releases, minimize dependencies, cache frequently used data, use official actions when possible, implement security scanning, document usage clearly, and test actions thoroughly. Store reusable actions in separate repositories or use local actions for repository-specific logic.
DevOps Use: Reduce workflow duplication, standardize deployment processes across projects, maintain consistent tooling versions, and enable team-wide automation standards.
Q115
Git / GitHub
Git / GitHub
What are Git hooks and how are they used in DevOps automation?
githooksautomation
Answer
Answer
Explanation: Git hooks are scripts that run automatically at specific Git events. Client-side hooks: pre-commit (before commit), prepare-commit-msg (edit commit message), commit-msg (validate message), post-commit (after commit). Server-side hooks: pre-receive (before push), update (per branch), post-receive (after push). Located in .git/hooks/, written in any scripting language. Hooks enable automated testing, code formatting, message validation, and deployment triggers.
DevOps Use: Pre-commit hooks run linting/testing, commit-msg enforces message standards, post-receive triggers CI/CD pipelines, and pre-push prevents bad commits from reaching remote.
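A minimal client-side hook sketch, assuming an npm project with `lint` and `test` scripts (hypothetical); save it as `.git/hooks/pre-commit` and make it executable with `chmod +x`:

```bash
#!/bin/sh
# Abort the commit if linting or tests fail
npm run lint || exit 1
npm test || exit 1
```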
Q116
Git / GitHub
Git / GitHub
What are Git submodules and when should you use them?
gitsubmodulesdependencies
Answer
Answer
Explanation: Submodules allow including one Git repository inside another as a subdirectory, maintaining separate version control. Useful for shared libraries, third-party dependencies, or splitting large projects. Commands: git submodule add <url>, git submodule init, git submodule update. Submodules point to specific commits, not branches. Updates require explicit commands. Alternative approaches include subtrees (simpler) or package managers (language-specific).
DevOps Use: Managing shared infrastructure code, including common CI/CD scripts across projects, and maintaining consistent tooling versions across microservices.
Q117
Git / GitHub
Git / GitHub
What are Git workflow best practices for teams?
gitbest-practicesworkflow
Answer
Answer
Explanation: Best practices include: meaningful commit messages (conventional commits format), atomic commits (one logical change), frequent small commits over large ones, descriptive branch names (feature/user-auth), regular pulls to stay updated, code reviews via pull requests, protecting main branch, using .gitignore properly, and consistent branching strategy. Avoid committing secrets, large binaries, or generated files.
DevOps Use: Consistent workflows enable automated CI/CD triggers, improve code review processes, facilitate rollbacks, and maintain deployment traceability.
Q118
Git / GitHub
Git / GitHub
How do you handle large files in Git repositories?
gitlfslarge-files
Answer
Answer
Explanation: Git struggles with large files (>100MB) as it stores complete history. Solutions: Git LFS (Large File Storage) stores large files externally while keeping pointers in Git, .gitignore to exclude large files, git-annex for distributed large file management, or separate artifact storage (AWS S3, Artifactory). Git LFS tracks file types (.psd, .zip) and replaces them with text pointers, downloading actual files on checkout.
DevOps Use: Store build artifacts, Docker images, ML models, and media files in LFS or external storage, keeping repositories lightweight for faster CI/CD operations.
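A typical LFS setup; `design.psd` is a placeholder file:

```bash
git lfs install                     # one-time setup per machine
git lfs track "*.psd" "*.zip"       # writes patterns to .gitattributes
git add .gitattributes design.psd
git commit -m "chore: track binaries with LFS"
git lfs ls-files                    # verify which files LFS manages
```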
Q119
GitHub Actions
GitHub Actions
What is GitHub Actions and what problems does it solve?
github-actionsfundamentalsautomation
Answer
Answer
Explanation: GitHub Actions is a CI/CD platform that automates workflows directly within GitHub repositories. It solves problems like: manual deployment processes, inconsistent build environments, lack of automated testing, complex CI/CD setup, and integration challenges between development and operations. Actions provides event-driven automation, pre-built actions marketplace, integrated secrets management, and native GitHub integration for seamless DevOps workflows.
DevOps Use: Automates build, test, and deployment pipelines, enables GitOps workflows, provides infrastructure automation, and integrates with cloud services for complete DevOps lifecycle management.
Q120
GitHub Actions
GitHub Actions
What is a workflow in GitHub Actions and where are workflow files stored?
github-actionsworkflowsstructure
Answer
Answer
Explanation: A workflow is an automated process defined by a YAML file that runs when triggered by specific events. Workflows contain one or more jobs that execute in parallel or sequentially. Workflow files are stored in the `.github/workflows/` directory in the repository root. Each YAML file represents a separate workflow. Workflows can be triggered by repository events, schedules, or manual dispatch.
DevOps Use: Workflows orchestrate CI/CD pipelines, automate testing and deployment, handle release management, and integrate with external services for comprehensive automation.
Q121
GitHub Actions
GitHub Actions
What are the main parts of a workflow YAML (on, jobs, jobs.<id>, steps)?
github-actionsyamlstructure
Answer
Answer
Explanation: Workflow YAML structure includes: `on` (defines triggers like push, pull_request), `jobs` (contains all jobs in the workflow), `jobs.<job_id>` (individual job definition with unique identifier), and `steps` (sequential actions within a job). Additional elements include `name` (workflow name), `env` (environment variables), `defaults` (default settings), and `permissions` (token permissions). Each job runs on a separate runner instance.
DevOps Use: Structured YAML enables version-controlled CI/CD definitions, parallel job execution, conditional logic, and integration with GitHub's event system.
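A minimal complete workflow sketch showing `on`, `jobs`, and `steps` (the Node setup is illustrative):

```yaml
# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
```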
Q122
GitHub Actions
GitHub Actions
What events can trigger a workflow (push, pull_request, workflow_dispatch, schedule, repository_dispatch)?
github-actionstriggersevents
Answer
Answer
Explanation: Common triggers include: `push` (code commits to branches), `pull_request` (PR creation/updates), `workflow_dispatch` (manual trigger with inputs), `schedule` (cron-based timing), `repository_dispatch` (external API triggers), `release` (GitHub releases), `issues` (issue events), and `workflow_run` (triggered by other workflows). Each trigger supports activity types and filters for precise control over when workflows execute.
DevOps Use: Different triggers enable various automation patterns: push for CI, pull_request for code review automation, schedule for maintenance tasks, and workflow_dispatch for manual deployments.
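Several triggers combined in one `on` block; the path filter and dispatch event type are illustrative:

```yaml
on:
  push:
    branches: [main]
    paths: ['src/**']          # only when source files change
  pull_request:
    types: [opened, synchronize]
  schedule:
    - cron: '0 2 * * *'        # nightly at 02:00 UTC
  workflow_dispatch:           # manual "Run workflow" button
  repository_dispatch:
    types: [deploy-request]    # hypothetical external event type
```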
Q123
GitHub Actions
GitHub Actions
What's the difference between a job and a step? How are they executed?
github-actionsjobssteps
Answer
Answer
Explanation: Jobs are independent units of work that run on separate runner instances and can execute in parallel by default. Steps are sequential commands within a job that share the same runner environment, filesystem, and environment variables. Jobs can depend on other jobs using `needs`, creating execution order. Steps within a job execute sequentially and can pass data between them using outputs. Failed steps stop job execution unless `continue-on-error` is set.
DevOps Use: Jobs enable parallel execution for faster pipelines, while steps provide granular control within each job. Use jobs for different environments or independent tasks, steps for sequential operations.
Q124
GitHub Actions
GitHub Actions
What is runs-on and how do you choose between GitHub-hosted and self-hosted runners?
github-actionsrunnersinfrastructure
Answer
Answer
Explanation: `runs-on` specifies the runner environment for job execution. GitHub-hosted runners (ubuntu-latest, windows-latest, macos-latest) provide managed infrastructure with pre-installed tools, automatic updates, and clean environments per job. Self-hosted runners offer custom environments, internal network access, specific hardware, and cost control for high-volume usage. Choose based on requirements for tools, network access, performance, and cost.
DevOps Use: GitHub-hosted for standard CI/CD workflows, self-hosted for custom environments, internal services access, GPU workloads, or cost optimization at scale.
Q125
GitHub Actions
GitHub Actions
What are the pros and cons of using self-hosted runners vs GitHub-hosted runners?
github-actionsrunnerscomparison
Answer
Answer
Explanation: GitHub-hosted pros: managed infrastructure, clean environments, automatic updates, no maintenance. Cons: limited customization, no internal network access, usage costs, limited resources. Self-hosted pros: custom environments, internal network access, cost control, specific hardware. Cons: maintenance overhead, security responsibility, setup complexity, potential for environment drift. Consider security, maintenance, cost, and specific requirements when choosing.
DevOps Use: GitHub-hosted for standard workflows and quick setup. Self-hosted for enterprise environments, custom tooling, internal integrations, and high-volume usage scenarios.
Q126
GitHub Actions
GitHub Actions
What is an action? Explain the difference between JavaScript actions, Docker container actions, and composite actions.
github-actionsactionstypes
Answer
Answer
Explanation: Actions are reusable units of code that perform specific tasks. JavaScript actions run directly on runners using Node.js, offering fast execution and cross-platform compatibility. Docker container actions package code with dependencies in containers, providing consistent environments but slower startup. Composite actions combine multiple steps into reusable workflows, enabling step-level reusability without custom code. Each type serves different use cases based on complexity and requirements.
DevOps Use: JavaScript actions for simple, fast operations. Docker actions for complex dependencies or specific environments. Composite actions for reusable step sequences and workflow templates.
Q127
GitHub Actions
GitHub Actions
How do you create and publish a custom action (action.yml / action.yaml metadata)?
github-actionscustom-actionsdevelopment
Answer
Answer
Explanation: Create custom actions by defining metadata in an `action.yml` or `action.yaml` file with: name, description, inputs, outputs, and runs configuration. For JavaScript actions, specify a Node runtime (currently `runs.using: node20`; `node16` is deprecated) and the main script. For Docker actions, use `runs.using: docker` and a Dockerfile. For composite actions, use `runs.using: composite` with steps. Publish to GitHub Marketplace or use directly from repositories. Version with Git tags for release management.
DevOps Use: Custom actions enable code reuse, standardize workflows across teams, create organization-specific tooling, and contribute to the community ecosystem.
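A composite-action sketch; the action name, inputs, and npm scripts are illustrative:

```yaml
# action.yml
name: 'Setup and Test'
description: 'Install dependencies and run the test suite'
inputs:
  node-version:
    description: 'Node.js version to install'
    required: false
    default: '20'
runs:
  using: composite
  steps:
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ inputs.node-version }}
    - run: npm ci && npm test
      shell: bash              # composite run steps must declare a shell
```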
Q128
GitHub Actions
GitHub Actions
How do you pass data between steps and between jobs (step outputs, job outputs, needs)?
github-actionsdata-passingoutputs
Answer
Answer
Explanation: Pass data between steps using step outputs: set with `echo "name=value" >> $GITHUB_OUTPUT`, access with `steps.step-id.outputs.name`. Pass data between jobs using job outputs: define in job's `outputs` section, access in dependent jobs via `needs.job-id.outputs.name`. Use `needs` keyword to create job dependencies and access outputs. For larger data, use artifacts or external storage.
DevOps Use: Data passing enables dynamic workflows, conditional execution, artifact coordination, and complex pipeline orchestration across multiple jobs and environments.
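A step-output-to-job-output sketch; the `meta` step id and version string are placeholders:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.meta.outputs.version }}
    steps:
      - id: meta
        run: echo "version=1.2.3" >> "$GITHUB_OUTPUT"
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying version ${{ needs.build.outputs.version }}"
```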
Q129
GitHub Actions
GitHub Actions
What are contexts and expressions (e.g., github, env, secrets, steps), and how do you evaluate them?
github-actionscontextsexpressions
Answer
Answer
Explanation: Contexts provide information about workflow runs, jobs, and steps. Common contexts: `github` (event data, repository info), `env` (environment variables), `secrets` (encrypted secrets), `steps` (step outputs), `job` (job status), `runner` (runner environment). Expressions use `${{ }}` syntax to evaluate contexts, perform calculations, and implement conditional logic. Functions like `contains()`, `startsWith()`, and `toJSON()` enable complex evaluations.
DevOps Use: Contexts enable dynamic workflows, conditional execution, environment-specific logic, and integration with GitHub's event system for sophisticated automation.
Q130
GitHub Actions
GitHub Actions
How do you securely use secrets in workflows, and where can secrets be stored (repo, environment, org)?
github-actionssecretssecurity
Answer
Answer
Explanation: Secrets are encrypted values accessible via `secrets` context. Storage levels: repository secrets (repo-wide access), environment secrets (environment-specific with protection rules), organization secrets (shared across repos). Secrets are automatically masked in logs and can't be printed directly. Use environment secrets for deployment credentials, repository secrets for general use, organization secrets for shared services. Implement least-privilege access and regular rotation.
DevOps Use: Secure credential management for deployments, API integrations, and external service access while maintaining security and compliance requirements.
Q131
GitHub Actions
GitHub Actions
What is OpenID Connect (OIDC) in the Actions context and why is it recommended over long-lived cloud credentials?
github-actionsoidcauthentication
Answer
Answer
Explanation: OIDC enables GitHub Actions to authenticate with cloud providers using short-lived tokens instead of storing long-lived credentials. GitHub acts as identity provider, issuing JWT tokens with claims about the workflow context. Cloud providers trust GitHub's OIDC and exchange tokens for temporary credentials. Benefits: no stored secrets, automatic rotation, fine-grained permissions, audit trails, and reduced credential management overhead.
DevOps Use: Secure cloud deployments without storing cloud credentials, automated credential rotation, compliance with security best practices, and reduced secret sprawl.
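A sketch using AWS as the example provider; the role ARN and region are hypothetical, and the IAM trust policy for GitHub's OIDC provider must already exist:

```yaml
permissions:
  id-token: write   # lets the job request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/gha-deploy
          aws-region: eu-west-1
      - run: aws s3 ls   # runs with short-lived credentials, no stored secrets
```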
Q132
GitHub Actions
GitHub Actions
How does artifact upload/download work and what are retention/size considerations?
github-actionsartifactsstorage
Answer
Answer
Explanation: Artifacts store files between jobs or for later download using `actions/upload-artifact` and `actions/download-artifact`. Artifacts persist after workflow completion with configurable retention (default 90 days, up to 400 days for private repos). Artifact storage counts against your plan's storage quota, so keep artifacts lean and retention periods short. Artifacts are compressed automatically and can be downloaded from the GitHub UI or API. Use for build outputs, test results, logs, and deployment packages.
DevOps Use: Share build artifacts between jobs, store deployment packages, preserve test results and logs, and enable manual artifact download for debugging.
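An upload/download sketch between two jobs; the build step and artifact name are stand-ins:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: mkdir -p dist && echo "build output" > dist/app.txt   # stand-in build step
      - uses: actions/upload-artifact@v4
        with:
          name: app-dist
          path: dist/
          retention-days: 14
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: app-dist
          path: dist/
```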
Q133
GitHub Actions
GitHub Actions
How does caching work (dependency cache) and how do you use actions/cache or dependency caching to speed CI?
github-actionscachingperformance
Answer
Answer
Explanation: Caching stores dependencies and build outputs between workflow runs using `actions/cache`. Cache keys identify cached content, restore-keys provide fallback options. Dependency caching (actions/setup-node, actions/setup-python) automatically caches package managers' dependencies. Cache scope is limited to branch and can be shared with base branches. Effective caching reduces build times by avoiding repeated downloads and builds.
DevOps Use: Accelerate CI/CD pipelines, reduce network usage, lower infrastructure costs, and improve developer experience through faster feedback loops.
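An npm-flavored caching sketch; swap the path and key for your package manager:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: actions/cache@v4
    with:
      path: ~/.npm
      key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
      restore-keys: |
        npm-${{ runner.os }}-
  - run: npm ci
```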
Q134
GitHub Actions
GitHub Actions
How do you implement matrix builds and what are tradeoffs (parallelism vs cost)?
github-actionsmatrixparallelism
Answer
Answer
Explanation: Matrix builds run jobs across multiple configurations using the `strategy.matrix` key. Define variables like OS, language versions, or custom parameters. GitHub creates separate jobs for each combination, running in parallel. Benefits: test multiple configurations simultaneously, faster overall execution. Tradeoffs: increased runner usage and costs, potential for job failures across matrix, complexity in result aggregation. Use `include`/`exclude` for fine-grained control.
DevOps Use: Cross-platform testing, multi-version compatibility checks, parallel deployment to multiple environments, and comprehensive test coverage.
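A matrix sketch with an exclusion; the OS/Node combinations are illustrative:

```yaml
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest]
        node: [18, 20]
        exclude:
          - os: windows-latest
            node: 18
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: npm ci && npm test
```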
Q135
GitHub Actions
GitHub Actions
What is workflow_dispatch and how do you add inputs for manual workflow runs?
github-actionsmanualinputs
Answer
Answer
Explanation: `workflow_dispatch` enables manual workflow triggering from GitHub UI or API with optional inputs. Define inputs with type (string, boolean, choice, environment), description, required flag, and default values. Inputs are accessible via `github.event.inputs` context. Useful for deployments, maintenance tasks, or parameterized workflows. Combine with branch selection for flexible manual execution across different branches.
DevOps Use: Manual deployments with parameters, maintenance workflows, emergency procedures, and user-driven automation with custom inputs.
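A parameterized manual trigger sketch; the input names and environments are illustrative:

```yaml
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Target environment'
        type: choice
        options: [staging, production]
        required: true
      dry_run:
        type: boolean
        default: false

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploy to ${{ inputs.environment }} (dry_run=${{ inputs.dry_run }})"
```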
Q136
GitHub Actions
GitHub Actions
How do you schedule workflows (cron) and what pitfalls should you watch for?
github-actionsschedulingcron
Answer
Answer
Explanation: Schedule workflows using `schedule` trigger with cron syntax: `cron: '0 2 * * *'` (daily at 2 AM UTC). Pitfalls: GitHub may delay scheduled workflows during high load, inactive repositories may have schedules disabled, cron runs on UTC time, and minimum interval is 5 minutes. Use multiple schedules for different frequencies, implement idempotency, and consider timezone implications for global teams.
DevOps Use: Automated maintenance tasks, dependency updates, backup operations, report generation, and periodic health checks.
Q137
GitHub Actions
GitHub Actions
How do environments and environment protection rules (reviews, wait timers) work in deployments?
github-actionsenvironmentsprotection
Answer
Answer
Explanation: Environments provide deployment targets with protection rules: required reviewers (manual approval), wait timers (deployment delays), and environment-specific secrets/variables. Jobs reference environments using `environment` key. Protection rules gate deployments, ensuring proper approvals and timing. Reviewers can approve/reject deployments, and wait timers provide cooling-off periods. Environment history tracks all deployments.
DevOps Use: Production deployment gates, compliance requirements, staged rollouts, and controlled release processes with proper approvals and audit trails.
Q138
GitHub Actions
GitHub Actions
How do you gate merges with GitHub Actions (required status checks / branch protection)?
github-actionsbranch-protectionquality-gates
Answer
Answer
Explanation: Configure branch protection rules to require status checks from specific workflows/jobs before merging. Status checks appear as required checks that must pass. Use job names or workflow names as status check contexts. Implement different requirements for different branches (main vs feature). Combine with required reviews, up-to-date branches, and administrator enforcement for comprehensive protection.
DevOps Use: Quality gates for code merges, automated testing requirements, security scanning mandates, and compliance with development standards.
Q139
GitHub Actions
GitHub Actions
How do you debug failing workflows (logs, ACTIONS_RUNNER_DEBUG, re-run jobs, step-level output)?
github-actionsdebuggingtroubleshooting
Answer
Answer
Explanation: Debug workflows using: workflow logs (step-by-step output), `ACTIONS_RUNNER_DEBUG=true` secret for verbose logging, re-run failed jobs or entire workflows, step-level output examination, and artifact inspection. Use `echo` statements for custom debugging, `set -x` for shell script debugging, and `toJSON()` for context inspection. GitHub provides downloadable logs and real-time log streaming.
DevOps Use: Troubleshoot CI/CD failures, identify environment issues, debug complex workflows, and maintain reliable automation pipelines.
Q140
GitHub Actions
GitHub Actions
Why should you pin actions (tags vs commit SHA), and what's the security impact of not pinning?
github-actionssecuritypinning
Answer
Answer
Explanation: Pin actions to specific versions for security and stability. Tags (v1, v1.2.3) are mutable and can be moved, commit SHAs are immutable. Security risks of not pinning: malicious code injection, supply chain attacks, unexpected breaking changes, and compliance violations. Best practice: pin to commit SHA for production, use Dependabot for updates, and regularly audit action dependencies.
DevOps Use: Supply chain security, reproducible builds, compliance requirements, and protection against malicious action updates.
Q141
GitHub Actions
GitHub Actions
What are reusable workflows and how do you call/use them from other repos/workflows?
github-actionsreusableworkflows
Answer
Answer
Explanation: Reusable workflows are workflow templates that can be called from other workflows using the `uses` keyword with `workflow_call` trigger. Define inputs, outputs, and secrets for parameterization. Call with `uses: org/repo/.github/workflows/workflow.yml@ref` and pass inputs/secrets. Enable workflow sharing across repositories, standardization, and centralized maintenance of common patterns.
DevOps Use: Standardize CI/CD processes across teams, reduce duplication, centralize workflow maintenance, and enforce organizational standards.
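A two-part sketch: the called workflow's `workflow_call` contract, then a caller; the repo path, ref, and secret names are illustrative:

```yaml
# .github/workflows/reusable-deploy.yml (the called workflow)
on:
  workflow_call:
    inputs:
      environment:
        type: string
        required: true
    secrets:
      deploy_token:
        required: true

# In the calling workflow:
jobs:
  deploy:
    uses: acme/shared-workflows/.github/workflows/reusable-deploy.yml@v1
    with:
      environment: production
    secrets:
      deploy_token: ${{ secrets.DEPLOY_TOKEN }}
```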
Q142
GitHub Actions
GitHub Actions
How do you write a robust release/deploy workflow (artifacts, immutable builds, promotion between environments)?
github-actionsreleasedeployment
Answer
Answer
Explanation: Robust release workflows include: immutable artifact creation, environment promotion strategy, rollback capabilities, and comprehensive testing. Build once and promote the same artifact through environments (dev → staging → prod). Use semantic versioning, create GitHub releases, implement approval gates, and maintain deployment history. Include health checks, monitoring integration, and automated rollback triggers.
DevOps Use: Reliable software delivery, consistent deployments, audit trails, and risk mitigation through proper release management practices.
Q143
GitHub Actions
GitHub Actions
How do you limit concurrency and cancel redundant runs (concurrency keyword)?
github-actionsconcurrencycontrol
Answer
Answer
Explanation: Use `concurrency` keyword to control parallel workflow execution. Define concurrency groups with unique identifiers, optionally cancel in-progress runs with `cancel-in-progress: true`. Useful for deployment workflows, resource-intensive operations, and preventing conflicts. Scope concurrency to workflow, job, or custom groups based on requirements. Helps manage runner usage and prevent deployment conflicts.
DevOps Use: Prevent deployment conflicts, manage resource usage, ensure sequential deployments, and optimize runner utilization.
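A top-level concurrency sketch keyed on the ref:

```yaml
concurrency:
  group: deploy-${{ github.ref }}
  cancel-in-progress: true   # cancel superseded runs for the same ref
```

For production deploys, `cancel-in-progress: false` is usually safer, since it lets an in-flight deployment finish while queuing the next one.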
Q144
GitHub Actions
GitHub Actions
What are common security pitfalls in Actions (untrusted marketplace actions, injection, secrets exposure) and mitigations?
github-actionssecuritypitfalls
Answer
Answer
Explanation: Common pitfalls: using untrusted marketplace actions, script injection via user inputs, secrets exposure in logs, overprivileged tokens, and supply chain attacks. Mitigations: pin actions to commit SHAs, validate and sanitize inputs, use intermediate environment variables, implement least-privilege permissions, audit action dependencies, and use OIDC instead of long-lived credentials.
DevOps Use: Secure CI/CD pipelines, protect sensitive data, maintain compliance, and prevent supply chain attacks through comprehensive security practices.
Q145
GitHub Actions
GitHub Actions
How do you secure and manage self-hosted runners (network isolation, labels, runner groups, least privilege)?
github-actionsself-hostedsecurity
Answer
Answer
Explanation: Secure self-hosted runners through: network isolation (VPCs, firewalls), ephemeral runners (fresh per job), least-privilege access, regular updates, and monitoring. Use runner groups for access control, labels for job targeting, and dedicated machines for sensitive workloads. Implement runner registration automation, health monitoring, and incident response procedures.
DevOps Use: Enterprise security requirements, internal network access, custom environments, and compliance with organizational security policies.
Q146
GitHub Actions
GitHub Actions
How does billing for GitHub Actions work (minutes, artifact storage, free for public repos, self-hosted exceptions)?
github-actionsbillingcosts
Answer
Answer
Explanation: GitHub Actions billing includes: compute minutes (varies by runner OS), artifact and log storage, and data transfer. Free tiers: unlimited for public repositories, monthly minutes for private repos (2000 for free accounts). Self-hosted runners don't consume GitHub minutes but require infrastructure costs. Premium features like larger runners and advanced security require paid plans.
DevOps Use: Cost optimization, resource planning, budget management, and choosing appropriate runner types based on cost-benefit analysis.
Q147
GitHub Actions
GitHub Actions
How do workflow commands and the Actions Toolkit work (annotations, setting outputs, masks)?
github-actionstoolkitcommands
Answer
Answer
Explanation: Workflow commands enable communication between actions and runner using special echo statements. Common commands: `::set-output` (deprecated, use $GITHUB_OUTPUT), `::add-mask` (hide values in logs), `::error`/`::warning` (annotations), `::group` (collapsible log sections). Actions Toolkit provides JavaScript/TypeScript libraries (@actions/core, @actions/github) for easier action development with proper typing and utilities.
DevOps Use: Custom action development, workflow debugging, log organization, and integration with GitHub's workflow system.
Q148
GitHub Actions
GitHub Actions
What are the best practices a junior DevOps engineer should follow when writing GitHub Actions workflows?
github-actionsbest-practicesjunior
Answer
Answer
Explanation: Best practices include: use small, focused steps for better debugging, implement proper error handling, pin actions to specific versions, use least-privilege permissions, avoid inline scripts (prefer actions), implement proper caching, use meaningful names and descriptions, add comments for complex logic, test workflows thoroughly, and follow security guidelines. Start simple and iterate based on requirements.
DevOps Use: Maintainable workflows, reliable automation, security compliance, team collaboration, and professional development practices.
Q149
GitOps
GitOps
What is GitOps, and how is it different from traditional CI/CD?
gitopsfundamentalscicd
Answer
Answer
Explanation: GitOps is a deployment methodology where Git repositories serve as the single source of truth for infrastructure and application configuration. Unlike traditional CI/CD where pipelines push changes to environments, GitOps uses pull-based deployment where operators continuously monitor Git repositories and automatically sync changes to target environments. Traditional CI/CD is push-based (CI system deploys), GitOps is pull-based (environment pulls changes). GitOps ensures declarative configuration, version control, and automated reconciliation.
DevOps Use: GitOps enables infrastructure as code, improves deployment consistency, provides audit trails, and reduces manual intervention in deployment processes.
Q150
GitOps
GitOps
What are the key principles of GitOps (declarative, version-controlled, automated reconciliation)?
gitopsprinciplesdeclarative
Answer
Answer
Explanation: GitOps principles include: 1) Declarative - infrastructure and applications described declaratively (YAML manifests), 2) Version-controlled - all configuration stored in Git with full history and branching, 3) Automated reconciliation - operators continuously compare desired state (Git) with actual state (cluster) and automatically fix drift, 4) Immutable - changes made through Git commits, not direct cluster modifications. These principles ensure consistency, traceability, and reliability.
DevOps Use: Principles enable consistent deployments, audit compliance, rollback capabilities, and collaborative infrastructure management through familiar Git workflows.
Q151
GitOps
GitOps
How does GitOps improve deployment reliability and traceability?
gitopsreliabilitytraceability
Answer
Answer
Explanation: GitOps improves reliability through: immutable deployments (changes only via Git), automated rollback (revert Git commits), drift detection and correction, and declarative desired state. Traceability benefits include: complete audit trail in Git history, who changed what and when, approval workflows via pull requests, and correlation between deployments and Git commits. Every change is tracked, reviewed, and reversible.
DevOps Use: Enhanced reliability reduces deployment failures and downtime. Complete traceability supports compliance, debugging, and incident response with clear change history.
Q152
GitOps
GitOps
What is the role of a Git repository in GitOps workflows?
gitopsgitrepository
Answer
Answer
Explanation: Git repository serves as the single source of truth containing: application manifests, infrastructure configuration, environment-specific settings, and deployment policies. It acts as the control plane for deployments, storing desired state declarations, providing version history, enabling collaboration through pull requests, and triggering automated deployments. Repository structure typically separates applications, environments, and shared configurations.
DevOps Use: Centralized configuration management, version control for infrastructure, collaborative change management, and automated deployment triggers based on Git events.
Q153
GitOps
GitOps
What is meant by the 'single source of truth' in GitOps?
gitopstruthconsistency
Answer
Answer
Explanation: Single source of truth means Git repository contains the authoritative, definitive configuration for all environments and applications. All changes must go through Git - no direct cluster modifications, no configuration stored elsewhere, and no manual interventions. This ensures consistency, prevents configuration drift, enables proper change management, and provides complete audit trails. Any discrepancy between Git and actual state triggers automatic reconciliation.
DevOps Use: Eliminates configuration inconsistencies, ensures all changes are tracked and approved, enables reliable rollbacks, and supports compliance requirements.
Q154
GitOps
GitOps
Name popular GitOps tools (Flux, Argo CD) and their main differences.
gitopstoolsargocdflux
Answer
Answer
Explanation: Popular GitOps tools include Argo CD and Flux. Argo CD offers: web UI for visualization, application-centric approach, multi-cluster management, and RBAC integration. Flux provides: lightweight architecture, Helm integration, image automation, and notification system. Key differences: Argo CD has rich UI and application focus, Flux is more lightweight and toolkit-based. Both support multi-tenancy, Git integration, and automated synchronization.
DevOps Use: Choose Argo CD for teams needing visual interfaces and application management. Choose Flux for lightweight deployments and advanced automation features.
Q155
GitOps
GitOps
What is the role of a GitOps operator or controller?
gitopsoperatorcontroller
Answer
Answer
Explanation: GitOps operators are Kubernetes controllers that continuously monitor Git repositories and cluster state, detecting differences and automatically applying changes to achieve desired state. They perform: Git polling or webhook listening, manifest parsing and validation, cluster state comparison, automated synchronization, and drift detection/correction. Operators run inside the cluster, maintaining security by pulling changes rather than requiring external access.
DevOps Use: Automated deployment management, continuous reconciliation, security through pull-based model, and self-healing infrastructure without manual intervention.
Q156
GitOps
GitOps
How does Argo CD synchronize Git state with a Kubernetes cluster?
gitopsargocdsynchronization
Answer
Answer
Explanation: Argo CD synchronization process: 1) Application defines Git repository, path, and target cluster, 2) Argo CD polls Git repository for changes, 3) Compares desired state (Git manifests) with live state (cluster resources), 4) Detects differences and shows sync status, 5) Applies changes automatically (if auto-sync enabled) or manually, 6) Monitors health and provides rollback capabilities. Supports Helm, Kustomize, and plain YAML manifests.
DevOps Use: Automated application deployment, visual sync status monitoring, health checking, and rollback capabilities with comprehensive application lifecycle management.
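A minimal Argo CD Application manifest illustrating this setup (repository URL, path, and names are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git  # desired state lives here
    targetRevision: main
    path: apps/my-app/overlays/prod
  destination:
    server: https://kubernetes.default.svc               # in-cluster API server
    namespace: my-app
  syncPolicy:
    automated:
      prune: true     # delete resources that were removed from Git
      selfHeal: true  # revert manual cluster edits (drift correction)
```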
Q157
GitOps
GitOps
How does Flux detect changes in Git and apply them automatically?
gitopsfluxautomation
Answer
Answer
Explanation: Flux change detection works through: 1) Git polling at configured intervals or webhook notifications, 2) Source Controller monitors Git repositories and detects commits, 3) Kustomize/Helm Controllers process manifests and detect changes, 4) Reconciliation applies changes to cluster automatically, 5) Image automation can update Git with new container images. Flux uses GitRepository and Kustomization custom resources to define sources and sync behavior.
DevOps Use: Continuous deployment automation, image updates, multi-source synchronization, and notification integration for deployment events.
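A sketch of the two Flux custom resources described above (URL, paths, and intervals are illustrative):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: gitops-repo
  namespace: flux-system
spec:
  interval: 1m                  # how often the Source Controller checks Git
  url: https://github.com/example/gitops-repo.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 5m                  # reconciliation frequency
  sourceRef:
    kind: GitRepository
    name: gitops-repo
  path: ./apps/my-app
  prune: true                   # remove resources deleted from Git
```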
Q158
GitOps
GitOps
What is the difference between push-based and pull-based GitOps?
gitopspushpullsecurity
Answer
Answer
Explanation: Push-based GitOps: CI/CD system pushes changes directly to target environments, requires external access to production, and creates security risks. Pull-based GitOps: operators inside target environments pull changes from Git, no external access needed to production, improved security through network isolation, and better compliance. Pull-based is preferred for production environments due to security benefits and reduced attack surface.
DevOps Use: Pull-based provides better security posture, network isolation, and compliance for production deployments. Push-based may be suitable for development environments with less strict security requirements.
Q159
GitOps
GitOps
How do you implement environment promotion (dev → staging → prod) using GitOps?
gitopspromotionenvironments
Answer
Answer
Explanation: Environment promotion in GitOps uses: 1) Separate Git branches or directories for each environment, 2) Automated promotion through pull requests or branch merges, 3) Environment-specific configuration overlays (Kustomize/Helm), 4) Approval workflows for production promotions, 5) Automated testing in lower environments before promotion. Common patterns include branch-per-environment or directory-per-environment with promotion automation.
DevOps Use: Controlled deployment progression, automated testing gates, approval workflows, and consistent configuration management across environments.
Q160
GitOps
GitOps
How do you handle multiple clusters with GitOps?
gitopsmulti-clustermanagement
Answer
Answer
Explanation: Multi-cluster GitOps strategies include: 1) Cluster-per-environment (dev, staging, prod clusters), 2) Regional clusters for geographic distribution, 3) Tenant clusters for multi-tenancy, 4) Hub-and-spoke model with central management cluster, 5) Cluster-specific Git repositories or directories. Tools like Argo CD support multi-cluster management with cluster credentials and RBAC. Use cluster generators for dynamic cluster discovery.
DevOps Use: Geographic distribution, environment isolation, multi-tenancy, disaster recovery, and scalable infrastructure management across multiple Kubernetes clusters.
Q161
GitOps
GitOps
What is the typical GitOps workflow for updating an application (commit → pull request → sync → deploy)?
gitopsworkflowdeployment
Answer
Answer
Explanation: Typical GitOps workflow: 1) Developer commits application changes to Git, 2) CI builds and tests application, creates container image, 3) CI updates deployment manifests with new image tag, 4) Pull request created for manifest changes, 5) Code review and approval process, 6) Merge triggers GitOps operator, 7) Operator detects changes and syncs to cluster, 8) Application deployed with health monitoring. This ensures all changes go through proper review and version control.
DevOps Use: Controlled deployment process, code review for infrastructure changes, audit trails, and automated deployment with proper approvals.
Q162
GitOps
GitOps
How do you rollback to a previous state in GitOps?
gitopsrollbackrecovery
Answer
Answer
Explanation: GitOps rollback methods: 1) Git revert commits to previous working state, 2) Reset the branch to a previous commit (rewrites history, so prefer revert on shared branches), 3) Use GitOps tool rollback features (Argo CD rollback), 4) Maintain release tags for easy rollback points, 5) Automated rollback based on health checks or metrics. Revert-based rollbacks are version-controlled operations that create new commits, maintaining audit trails. Test rollback procedures regularly.
DevOps Use: Quick recovery from failed deployments, maintaining system availability, audit-compliant rollback procedures, and automated failure recovery.
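A revert-based rollback might look like the following (the commit SHA, app name, and revision ID are placeholders):

```bash
# Create a new commit that undoes the bad change; the operator syncs it out
git revert <bad-commit-sha>
git push origin main

# Or use the GitOps tool directly, e.g. Argo CD's rollback commands
argocd app history my-app
argocd app rollback my-app <revision-id>
```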
Q163
GitOps
GitOps
How do you manage secrets in GitOps (Sealed Secrets, SOPS, HashiCorp Vault)?
gitopssecretssecurity
Answer
Answer
Explanation: GitOps secret management approaches: 1) Sealed Secrets - encrypt secrets that only cluster can decrypt, 2) SOPS - encrypt files with age/PGP keys, 3) External Secrets Operator - sync from external vaults, 4) HashiCorp Vault integration, 5) Cloud provider secret managers (AWS Secrets Manager, Azure Key Vault). Never store plain secrets in Git. Use encryption at rest and proper key management.
DevOps Use: Secure secret storage, automated secret rotation, compliance with security policies, and integration with existing secret management infrastructure.
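As a sketch of the Sealed Secrets approach: a secret is encrypted locally with kubeseal, only the in-cluster controller can decrypt it, and the resulting manifest (names and ciphertext below are placeholders) is safe to commit:

```yaml
# Produced by: kubeseal --format yaml < secret.yaml > sealed-secret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: my-app
spec:
  encryptedData:
    password: AgB4kZx...  # ciphertext; only the controller's private key can decrypt it
```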
Q164
GitOps
GitOps
How do GitOps workflows integrate with CI pipelines (build → push → GitOps sync)?
gitopsciintegration
Answer
Answer
Explanation: CI/GitOps integration: 1) CI builds application and creates container image, 2) CI pushes image to registry with tags, 3) CI updates GitOps repository with new image references, 4) GitOps operator detects changes and deploys, 5) Separation of concerns - CI handles build/test, GitOps handles deployment. Use image updater tools or CI scripts to update manifests. Maintain separate repositories for application code and deployment manifests.
DevOps Use: Automated deployment pipeline, separation of build and deploy concerns, consistent deployment process, and integration with existing CI/CD tools.
Q165
GitOps
GitOps
How do you structure Git repositories for multiple environments and microservices?
gitopsrepositorystructure
Answer
Answer
Explanation: Repository structure patterns: 1) Monorepo - single repo with environment directories, 2) Environment-per-repo - separate repos for each environment, 3) App-of-apps - separate repos per application with central orchestration, 4) Microservice-per-repo with shared infrastructure repo. Consider factors: team size, security boundaries, change frequency, and operational complexity. Use consistent directory structures and naming conventions.
DevOps Use: Organized configuration management, appropriate access controls, scalable repository structure, and clear ownership boundaries for different teams.
Q166
GitOps
GitOps
How do you handle configuration drift in a GitOps-managed cluster?
gitopsdriftcompliance
Answer
Answer
Explanation: Configuration drift handling: 1) Continuous monitoring by GitOps operators, 2) Automatic drift detection and correction, 3) Alerting on drift events, 4) Preventing manual cluster changes through RBAC, 5) Regular drift reports and compliance checks. GitOps operators continuously reconcile desired state (Git) with actual state (cluster), automatically correcting unauthorized changes. Implement admission controllers to prevent drift.
DevOps Use: Maintaining configuration consistency, compliance enforcement, security posture management, and automated remediation of unauthorized changes.
Q167
GitOps
GitOps
How do you validate manifests before applying them (linting, policy checks, Kustomize/Helm)?
gitopsvalidationpolicy
Answer
Answer
Explanation: Manifest validation approaches: 1) YAML linting for syntax errors, 2) Kubernetes schema validation, 3) Policy engines (OPA Gatekeeper, Polaris), 4) Security scanning (Falco, Trivy), 5) Kustomize/Helm template validation, 6) Dry-run deployments in test environments. Implement validation in CI pipelines and GitOps operators. Use admission controllers for runtime policy enforcement.
DevOps Use: Prevent deployment failures, enforce security policies, maintain configuration standards, and catch errors before production deployment.
Q168
GitOps
GitOps
What is automatic reconciliation, and how frequently should the GitOps operator sync?
gitopsreconciliationautomation
Answer
Answer
Explanation: Automatic reconciliation is the continuous process where GitOps operators compare desired state (Git) with actual state (cluster) and automatically apply corrections. Sync frequency considerations: balance between responsiveness and resource usage, typical intervals 1-5 minutes, webhook-based triggers for immediate sync, exponential backoff for failures. Configure based on change frequency, criticality, and resource constraints.
DevOps Use: Continuous compliance, automated drift correction, timely deployment of changes, and self-healing infrastructure without manual intervention.
Q169
GitOps
GitOps
How do you secure Git repositories used in GitOps (branch protection, signed commits)?
gitopssecuritygit
Answer
Answer
Explanation: Git repository security measures: 1) Branch protection rules requiring reviews and status checks, 2) Signed commits with GPG keys for authenticity, 3) Access controls and RBAC for repository permissions, 4) Audit logging for all repository activities, 5) Secret scanning to prevent credential exposure, 6) Two-factor authentication for all users. Implement security scanning in CI pipelines and regular security audits.
DevOps Use: Secure infrastructure configuration, compliance with security policies, audit trails for changes, and protection against unauthorized modifications.
Q170
GitOps
GitOps
How do you manage RBAC and access control in GitOps tools like Argo CD?
gitopsrbacaccess-control
Answer
Answer
Explanation: RBAC in GitOps tools: 1) Integration with identity providers (OIDC, LDAP, SAML), 2) Role-based permissions for applications and clusters, 3) Project-based access control, 4) Environment-specific permissions, 5) Audit logging for all actions. Argo CD supports fine-grained RBAC with policies for different user roles (developers, operators, admins). Implement least-privilege access principles.
DevOps Use: Secure multi-tenant environments, controlled access to production systems, compliance with organizational policies, and proper separation of duties.
Q171
GitOps
GitOps
What is the role of GPG or commit signing in GitOps?
gitopsgpgsigning
Answer
Answer
Explanation: GPG commit signing provides: 1) Cryptographic proof of commit authorship, 2) Integrity verification of commit contents, 3) Non-repudiation for audit purposes, 4) Protection against commit spoofing, 5) Compliance with security policies. In GitOps, signed commits ensure that infrastructure changes are authentic and traceable. Implement signing policies and verification in CI/CD pipelines.
DevOps Use: Enhanced security posture, audit compliance, protection against supply chain attacks, and verification of infrastructure changes authenticity.
Q172
GitOps
GitOps
How do you implement automated approval policies for production changes?
gitopsapprovalpolicies
Answer
Answer
Explanation: Automated approval policies: 1) Pull request workflows with required reviewers, 2) Automated testing and validation gates, 3) Policy engines for compliance checking, 4) Time-based approval windows, 5) Risk-based approval requirements (high-risk changes need more approvals). Integrate with identity systems, implement escalation procedures, and maintain audit trails. Use tools like OPA for policy enforcement.
DevOps Use: Controlled production deployments, compliance with change management policies, risk mitigation, and automated governance without manual bottlenecks.
Q173
GitOps
GitOps
What are common pitfalls of GitOps (large manifests, secret leaks, drift, manual edits)?
gitopspitfallsbest-practices
Answer
Answer
Explanation: Common GitOps pitfalls: 1) Large manifest files causing performance issues, 2) Secrets accidentally committed to Git, 3) Configuration drift from manual cluster edits, 4) Overly complex repository structures, 5) Inadequate testing of manifest changes, 6) Poor secret management practices, 7) Lack of proper RBAC and access controls. Implement proper tooling, training, and processes to avoid these issues.
DevOps Use: Avoiding common mistakes improves GitOps adoption success, maintains security posture, and ensures reliable deployment processes.
Q174
GitOps
GitOps
How do you monitor and alert on GitOps deployments?
gitopsmonitoringalerting
Answer
Answer
Explanation: GitOps monitoring includes: 1) Deployment status and health monitoring, 2) Sync status and drift detection alerts, 3) Application health and performance metrics, 4) Git repository activity monitoring, 5) Integration with observability platforms (Prometheus, Grafana), 6) Notification systems (Slack, email, PagerDuty). Monitor both GitOps operator health and deployed application health.
DevOps Use: Proactive issue detection, deployment success tracking, operational visibility, and rapid incident response for GitOps-managed applications.
Q175
GitOps
GitOps
How do you test GitOps workflows before deploying to production?
gitopstestingvalidation
Answer
Answer
Explanation: GitOps testing strategies: 1) Manifest validation and linting in CI, 2) Dry-run deployments in test environments, 3) Integration testing with temporary clusters, 4) Canary deployments for gradual rollout, 5) Automated rollback testing, 6) Policy validation and compliance checking. Use tools like kind, k3s, or cloud-based test clusters for validation.
DevOps Use: Preventing production failures, validating configuration changes, ensuring deployment reliability, and maintaining system stability through comprehensive testing.
Q176
GitOps
GitOps
How do you handle Helm charts and Kustomize overlays in GitOps workflows?
gitopshelmkustomize
Answer
Answer
Explanation: Helm and Kustomize integration: 1) Helm charts for complex applications with templating, 2) Kustomize overlays for environment-specific customizations, 3) GitOps tools support both natively (Argo CD, Flux), 4) Version management for charts and overlays, 5) Values files for environment-specific configurations. Use Helm for packaging, Kustomize for customization, and GitOps for deployment automation.
DevOps Use: Simplified application packaging, environment-specific customizations, reusable configuration templates, and standardized deployment patterns.
Q177
GitOps
GitOps
How do you rollback a failed deployment safely in GitOps?
gitopsrollbacksafety
Answer
Answer
Explanation: Safe GitOps rollback procedures: 1) Automated health checks triggering rollback, 2) Git revert to previous working commit, 3) Database migration rollback strategies, 4) Traffic shifting for gradual rollback, 5) Monitoring and validation during rollback, 6) Communication and incident response procedures. Test rollback procedures regularly and maintain rollback runbooks.
DevOps Use: Rapid recovery from failed deployments, minimizing downtime, maintaining service availability, and ensuring data consistency during rollbacks.
Q178
GitOps
GitOps
How do GitOps practices scale in multi-team, multi-cluster Kubernetes environments?
gitopsscalingmulti-team
Answer
Answer
Explanation: Scaling GitOps for multi-team environments: 1) Repository structure supporting team boundaries, 2) RBAC and access controls per team/cluster, 3) Standardized GitOps patterns and tooling, 4) Self-service capabilities for teams, 5) Central platform team managing GitOps infrastructure, 6) Monitoring and governance across all deployments. Implement proper tenant isolation and resource quotas.
DevOps Use: Organizational scalability, team autonomy with governance, consistent deployment practices, and efficient resource utilization across multiple teams and clusters.
Q179
Jenkins
Jenkins
What is Jenkins, and why is it used in DevOps?
jenkinsfundamentalsdevops
Answer
Answer
Explanation: Jenkins is an open-source automation server that enables Continuous Integration and Continuous Delivery (CI/CD) by automating the building, testing, and deployment of applications. It provides a web-based interface, extensive plugin ecosystem, and distributed build capabilities. Jenkins automates repetitive tasks, integrates with various tools (Git, Docker, cloud platforms), and provides feedback loops for development teams. It supports pipeline-as-code, parallel execution, and scalable architecture.
DevOps Use: Jenkins orchestrates CI/CD pipelines, automates testing and deployment, integrates development and operations workflows, and provides visibility into build and deployment processes.
Q180
Jenkins
Jenkins
What is the difference between Jenkins and other CI/CD tools (like GitHub Actions or GitLab CI)?
jenkinscomparisontools
Answer
Answer
Explanation: Jenkins is self-hosted with maximum flexibility and extensive plugin ecosystem, requiring infrastructure management. GitHub Actions is cloud-native with tight GitHub integration and marketplace actions. GitLab CI is integrated into GitLab platform with built-in features. Key differences: Jenkins offers most flexibility but requires maintenance, GitHub Actions excels in GitHub ecosystems, GitLab CI provides integrated DevOps platform. Choose based on existing infrastructure, team expertise, and integration requirements.
DevOps Use: Jenkins for complex enterprise environments, GitHub Actions for GitHub-centric workflows, GitLab CI for integrated DevOps platform needs.
Q181
Jenkins
Jenkins
What are Jenkins master and agent/slave nodes?
jenkinsarchitecturemasteragent
Answer
Answer
Explanation: Jenkins master (now called controller) manages the overall system: scheduling builds, dispatching jobs to agents, monitoring agents, and serving the web interface. Jenkins agents (formerly slaves) are worker nodes that execute the actual build jobs. Master-agent architecture enables distributed builds, load distribution, and environment-specific builds. Agents can be permanent (always connected) or ephemeral (created on-demand). Communication happens via JNLP or SSH protocols.
DevOps Use: Scalable build infrastructure, parallel job execution, environment isolation, and resource optimization across multiple machines or containers.
Q182
Jenkins
Jenkins
How do you install Jenkins, and what are the common installation methods?
jenkinsinstallationsetup
Answer
Answer
Explanation: Common Jenkins installation methods: 1) Package managers (apt, yum, brew), 2) WAR file deployment on application servers, 3) Docker containers for containerized deployments, 4) Cloud marketplace images (AWS, Azure, GCP), 5) Kubernetes using Helm charts, 6) Windows installer for Windows environments. Each method has different use cases: packages for traditional servers, Docker for containerized environments, cloud images for quick cloud deployment.
DevOps Use: Choose installation method based on infrastructure requirements, scalability needs, maintenance preferences, and existing technology stack.
Q183
Jenkins
Jenkins
What are Jenkins plugins, and why are they important?
jenkinspluginsextensibility
Answer
Answer
Explanation: Jenkins plugins extend core functionality by adding integrations, build steps, post-build actions, and UI enhancements. Popular plugins include Git, Pipeline, Blue Ocean, Docker, AWS, and Slack. Plugins enable Jenkins to integrate with virtually any tool or service. They're installed through the Plugin Manager and can be updated independently. Plugin ecosystem is Jenkins' greatest strength, providing solutions for specific needs without bloating the core system.
DevOps Use: Tool integrations, custom build steps, notification systems, cloud integrations, and specialized functionality for different technology stacks.
Q184
Jenkins
Jenkins
What are the different types of Jenkins jobs (Freestyle, Pipeline, Multibranch Pipeline, etc.)?
jenkinsjobstypes
Answer
Answer
Explanation: Jenkins job types include: 1) Freestyle Project - simple, UI-configured jobs for basic automation, 2) Pipeline - code-based jobs using Jenkinsfile, 3) Multibranch Pipeline - automatically creates pipelines for each branch, 4) Organization Folders - scans entire organizations for repositories, 5) Multi-configuration Project - matrix builds across different configurations, 6) External Job - monitors external processes. Each serves different use cases from simple automation to complex CI/CD workflows.
DevOps Use: Choose job types based on complexity: Freestyle for simple tasks, Pipeline for complex workflows, Multibranch for feature branch workflows, Organization Folders for multiple repositories.
Q185
Jenkins
Jenkins
What is a Jenkins Pipeline, and what are its advantages over Freestyle jobs?
jenkinspipelineadvantages
Answer
Answer
Explanation: Jenkins Pipeline is a suite of plugins for implementing and integrating continuous delivery pipelines as code (a Jenkinsfile). Advantages over Freestyle jobs: 1) Pipeline as Code - version controlled, reviewable, 2) Complex workflows - conditional logic, parallel execution, 3) Durability - survives Jenkins restarts, 4) Extensibility - custom steps and shared libraries, 5) Visualization - Blue Ocean interface, 6) Reusability - shared pipeline libraries. Pipelines support both Declarative and Scripted syntax.
DevOps Use: Complex CI/CD workflows, infrastructure as code practices, collaborative pipeline development, and advanced deployment strategies.
Q186
Jenkins
Jenkins
What is the difference between Declarative and Scripted pipelines?
jenkinspipelinedeclarativescripted
Answer
Answer
Explanation: Declarative Pipeline uses structured, predefined syntax with pipeline blocks (agent, stages, steps), providing simpler syntax, built-in error handling, and easier validation. Scripted Pipeline uses Groovy-based programming with node blocks, offering maximum flexibility, programmatic control, and complex logic capabilities. Declarative is recommended for most use cases due to simplicity and maintainability. Scripted is used for complex scenarios requiring advanced programming constructs.
DevOps Use: Declarative for standard CI/CD workflows and team collaboration. Scripted for complex automation requiring advanced logic, dynamic behavior, or extensive customization.
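Minimal side-by-side sketches of the two syntaxes (stage contents are illustrative; in practice each Jenkinsfile uses one style):

```groovy
// Declarative: structured blocks, validated before execution
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
    }
    post {
        failure { echo 'Build failed' }
    }
}

// Scripted: plain Groovy, maximum programmatic control
node {
    stage('Build') {
        try {
            sh 'make build'
        } catch (err) {
            echo "Build failed: ${err}"
            throw err
        }
    }
}
```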
Q187
Jenkins
Jenkins
How do you define a Jenkinsfile and where should it be stored?
jenkinsjenkinsfilepipeline
Answer
Answer
Explanation: Jenkinsfile is a text file containing Pipeline definition using Declarative or Scripted syntax. It should be stored in the root of the source code repository and named 'Jenkinsfile' (no extension). This enables Pipeline as Code, version control integration, and branch-specific pipeline configurations. Jenkinsfile can also be stored in Jenkins (though not recommended) or in a separate repository for shared pipelines. Use SCM integration for automatic pipeline updates.
DevOps Use: Version-controlled pipeline definitions, branch-specific configurations, collaborative pipeline development, and automated pipeline updates with code changes.
Q188
Jenkins
Jenkins
How do you trigger a Jenkins job manually, via Git commits, or on a schedule?
jenkinstriggersautomation
Answer
Answer
Explanation: Jenkins job triggers include: 1) Manual - 'Build Now' button or API calls, 2) SCM polling - periodically checks for changes, 3) Webhooks - Git repositories notify Jenkins of changes, 4) Scheduled - cron-like syntax for time-based triggers, 5) Upstream/downstream - triggered by other job completion, 6) Remote triggers - API with authentication tokens. Webhooks are preferred over polling for efficiency. Use different triggers based on workflow requirements.
DevOps Use: Automated CI/CD workflows, scheduled maintenance tasks, manual deployment controls, and integration with external systems through APIs.
Q189
Jenkins
Jenkins
How do you integrate Jenkins with GitHub, GitLab, or Bitbucket?
jenkinsscmintegration
Answer
Answer
Explanation: SCM integration involves: 1) Installing relevant plugins (GitHub, GitLab, Bitbucket), 2) Configuring credentials (SSH keys, personal access tokens), 3) Setting up webhooks for automatic triggering, 4) Configuring branch sources for multibranch pipelines, 5) Setting up status notifications back to SCM. Integration enables automatic builds on commits, pull request validation, status reporting, and branch discovery. Use organization scanning for multiple repositories.
DevOps Use: Automated CI/CD triggers, pull request validation, status reporting, branch-based workflows, and seamless developer experience.
Q190
Jenkins
Jenkins
How do you handle build artifacts in Jenkins, and how do you archive them?
jenkinsartifactsstorage
Answer
Answer
Explanation: Build artifacts are files produced during builds (binaries, packages, reports). Jenkins handles artifacts through: 1) Archive Artifacts post-build action, 2) Artifact retention policies, 3) Fingerprinting for tracking, 4) Artifact promotion between jobs, 5) External artifact repositories (Nexus, Artifactory). Use archiveArtifacts step in pipelines with patterns to specify files. Configure retention policies to manage storage. Artifacts can be downloaded from Jenkins UI or accessed via API.
DevOps Use: Build output preservation, deployment package management, test result storage, and artifact promotion through deployment pipeline stages.
Q191
Jenkins
Jenkins
How do you implement continuous delivery with Jenkins pipelines?
jenkinscddelivery
Answer
Answer
Explanation: Continuous Delivery implementation includes: 1) Multi-stage pipeline (build, test, deploy), 2) Environment promotion (dev → staging → prod), 3) Automated testing at each stage, 4) Manual approval gates for production, 5) Rollback capabilities, 6) Infrastructure as Code integration. Use pipeline stages, input steps for approvals, and parallel execution for efficiency. Implement proper artifact promotion and environment-specific configurations.
DevOps Use: Automated deployment pipelines, quality gates, environment consistency, risk mitigation through staged deployments, and rapid delivery capabilities.
Q192
Jenkins
Jenkins
How do you manage environment-specific configurations in Jenkins?
jenkinsconfigurationenvironments
Answer
Answer
Explanation: Environment configuration management approaches: 1) Environment variables in job configuration, 2) Properties files for different environments, 3) Parameter injection during builds, 4) External configuration management (Consul, etcd), 5) Pipeline parameters and input steps, 6) Credential binding for secrets. Use different strategies: environment-specific jobs, parameterized builds, or configuration templates. Implement proper separation between code and configuration.
DevOps Use: Environment consistency, secure configuration management, deployment flexibility, and separation of concerns between application code and environment settings.
Q193
Jenkins
Jenkins
How do you implement multi-branch pipelines and PR builds in Jenkins?
jenkinsmultibranchpr-builds
Answer
Answer
Explanation: Multi-branch pipelines automatically discover branches and create corresponding pipeline jobs. Implementation: 1) Create Multibranch Pipeline job, 2) Configure branch sources (Git, GitHub, etc.), 3) Set branch discovery strategies, 4) Configure build strategies (all branches, PRs only), 5) Set up webhook notifications, 6) Implement branch-specific Jenkinsfiles. PR builds validate changes before merge, providing feedback to developers and maintaining main branch quality.
DevOps Use: Feature branch workflows, pull request validation, automated testing for all branches, and maintaining code quality through pre-merge validation.
Q194
Jenkins
Jenkins
What are Jenkins agents, and how do you configure them?
jenkinsagentsconfiguration
Answer
Answer
Explanation: Jenkins agents execute build jobs on behalf of the master. Configuration methods: 1) Static agents - permanently connected via SSH or JNLP, 2) Dynamic agents - cloud-based, created on-demand, 3) Docker agents - containerized build environments, 4) Kubernetes agents - pods created for each build. Configuration involves: connection method, labels, executors, working directory, and environment variables. Agents provide scalability and environment isolation.
DevOps Use: Scalable build infrastructure, environment-specific builds, resource optimization, and parallel job execution across multiple machines or containers.
Q195
Jenkins
Jenkins
How do you scale Jenkins with multiple agents for faster builds?
jenkinsscalingperformance
Answer
Answer
Explanation: Jenkins scaling strategies: 1) Horizontal scaling with multiple agents, 2) Agent pools for different workloads, 3) Cloud-based dynamic agents (AWS, Azure, GCP), 4) Container-based agents (Docker, Kubernetes), 5) Load balancing across agents, 6) Pipeline parallelization. Use labels to route jobs to appropriate agents, implement auto-scaling for cloud agents, and optimize resource utilization. Monitor agent performance and adjust capacity based on demand.
DevOps Use: Improved build performance, cost optimization through dynamic scaling, resource specialization, and handling increased development team size.
Q196
Jenkins
Jenkins
How do you handle agent labels and node-specific builds?
jenkinslabelsagents
Answer
Answer
Explanation: Agent labels categorize agents by capabilities, environment, or purpose (linux, windows, docker, gpu). Use labels in pipeline agent directives or job configuration to route builds to appropriate agents. Label strategies: environment-based (dev, prod), capability-based (docker, maven), or resource-based (high-memory, gpu). Implement label hierarchies and use expressions for complex requirements. Labels enable workload distribution and environment-specific builds.
DevOps Use: Environment-specific deployments, resource optimization, capability-based routing, and maintaining build environment consistency.
Q197
Jenkins
Jenkins
How do you troubleshoot agent connectivity issues?
jenkinstroubleshootingconnectivity
Answer
Answer
Explanation: Agent connectivity troubleshooting steps: 1) Check network connectivity and firewall rules, 2) Verify agent logs for connection errors, 3) Validate credentials and permissions, 4) Test port accessibility (SSH: 22, JNLP: 50000), 5) Check Java versions compatibility, 6) Verify agent working directory permissions, 7) Review master logs for connection attempts. Common issues: network restrictions, credential problems, Java version mismatches, and resource constraints.
DevOps Use: Maintaining reliable build infrastructure, minimizing build delays, ensuring consistent agent availability, and supporting distributed development teams.
Q198
Jenkins
Jenkins
What is the difference between permanent agents and ephemeral/temporary agents?
jenkinsagentspermanentephemeral
Answer
Answer
Explanation: Permanent agents are always-on, persistent connections that remain available for multiple builds. Ephemeral agents are created on-demand for specific builds and destroyed afterward. Permanent agents: faster job startup, consistent environment, resource dedication. Ephemeral agents: clean environment per build, cost efficiency, auto-scaling, isolation. Choose based on workload patterns, cost considerations, and security requirements. Cloud providers offer both models.
DevOps Use: Permanent agents for consistent workloads and fast startup. Ephemeral agents for variable workloads, cost optimization, and enhanced security through isolation.
Q199
Jenkins
Jenkins
How do you secure Jenkins (authentication, authorization, credentials)?
jenkinssecurityauthentication
Answer
Answer
Explanation: Jenkins security involves: 1) Authentication - LDAP, Active Directory, OAuth, or built-in user database, 2) Authorization - Matrix-based security, Role-based access control, 3) Credentials management - encrypted storage, scoped access, 4) HTTPS/TLS encryption, 5) CSRF protection, 6) Agent security, 7) Plugin security updates. Enable security realm, configure authorization strategy, use credential binding, and implement regular security audits. Follow principle of least privilege.
DevOps Use: Protecting CI/CD infrastructure, securing sensitive credentials, maintaining compliance, and preventing unauthorized access to build systems.
Q200
Jenkins
Jenkins
How do you store and manage secrets in Jenkins pipelines?
jenkinssecretscredentials
Answer
Answer
Explanation: Jenkins secret management: 1) Credentials Plugin for encrypted storage, 2) Credential binding in pipelines, 3) Environment variable injection, 4) External secret management (Vault, AWS Secrets Manager), 5) Credential scoping (global, folder, job), 6) Secret masking in logs. Use withCredentials step in pipelines, avoid hardcoding secrets, implement credential rotation, and audit secret access. Never expose secrets in build logs or artifacts.
DevOps Use: Secure deployment credentials, API keys, database passwords, and integration with external systems while maintaining security and compliance.
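A minimal withCredentials sketch (the credential ID, variable names, and script are hypothetical):

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // Binds the stored credential to environment variables for
                // this block only; Jenkins masks both values in the log
                withCredentials([usernamePassword(credentialsId: 'db-creds',
                                                  usernameVariable: 'DB_USER',
                                                  passwordVariable: 'DB_PASS')]) {
                    sh './deploy.sh'  // reads DB_USER/DB_PASS from the environment
                }
            }
        }
    }
}
```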
Q201
Jenkins
Jenkins
How do you implement role-based access control (RBAC) in Jenkins?
jenkinsrbacpermissions
Answer
Answer
Explanation: Jenkins RBAC implementation: 1) Role-based Authorization Strategy plugin, 2) Define roles with specific permissions, 3) Assign users/groups to roles, 4) Folder-level permissions for project isolation, 5) Global vs project-specific roles, 6) Integration with external identity providers. Create roles like Developer, Tester, Admin with appropriate permissions. Use folder structure for team/project isolation. Implement least-privilege principle.
DevOps Use: Multi-team environments, project isolation, compliance requirements, and controlled access to different pipeline stages and environments.
Q202
Jenkins
Jenkins
How do you prevent exposing secrets in pipeline logs?
jenkinssecretssecurity
Answer
Answer
Explanation: Secret exposure prevention: 1) Use credential binding instead of environment variables, 2) Avoid echo/print statements with secrets, 3) Enable automatic secret masking, 4) Use intermediate variables carefully, 5) Implement log sanitization, 6) Review pipeline code for secret handling, 7) Use external secret management. Jenkins automatically masks bound credentials in logs. Be careful with string manipulation and concatenation of secrets.
DevOps Use: Maintaining security compliance, protecting sensitive information, preventing credential theft, and ensuring audit trail integrity.
Q203
Jenkins
Jenkins
What are best practices for Jenkinsfile structure and pipeline modularity?
jenkinsbest-practicespipeline
Answer
Answer
Explanation: Pipeline best practices: 1) Use Declarative syntax for simplicity, 2) Implement proper error handling and cleanup, 3) Use shared libraries for common functions, 4) Implement parallel execution where possible, 5) Use meaningful stage names and descriptions, 6) Implement proper artifact management, 7) Use pipeline parameters for flexibility, 8) Implement proper logging and notifications. Structure pipelines with clear stages, use functions for reusability, and maintain consistent patterns.
DevOps Use: Maintainable pipelines, team collaboration, code reusability, and consistent CI/CD patterns across projects.
Q204
Jenkins
Jenkins
How do you monitor Jenkins jobs and pipelines (notifications, logs, dashboards)?
jenkinsmonitoringobservability
Answer
Answer
Explanation: Jenkins monitoring approaches: 1) Built-in monitoring (build history, trends), 2) Email/Slack notifications for build status, 3) Blue Ocean for pipeline visualization, 4) Monitoring plugins (Prometheus, Datadog), 5) Log aggregation (ELK stack), 6) Custom dashboards (Grafana), 7) Health checks and system monitoring. Monitor build success rates, duration trends, queue lengths, and system resources. Implement alerting for failures and performance issues.
DevOps Use: Proactive issue detection, performance optimization, team notifications, and maintaining CI/CD pipeline health and reliability.
Q205
Jenkins
Jenkins
How do you back up Jenkins configurations and jobs?
jenkinsbackupdisaster-recovery
Answer
Answer
Explanation: Jenkins backup strategies: 1) JENKINS_HOME directory backup (complete system backup), 2) Configuration as Code (JCasC) for reproducible setups, 3) Job DSL for programmatic job creation, 4) Plugin-based backup solutions, 5) Database backups for external storage, 6) Version control for Jenkinsfiles and configurations. Implement regular automated backups, test restore procedures, and maintain backup retention policies. Store backups securely and separately from Jenkins instance.
DevOps Use: Disaster recovery, system migration, configuration management, and maintaining business continuity for CI/CD operations.
Q206
Jenkins
Jenkins
How do you handle pipeline failures and automatic retries?
jenkinsfailuresretry
Answer
Answer
Explanation: Pipeline failure handling: 1) Try-catch blocks for error handling, 2) Retry step for transient failures, 3) Post-build actions for cleanup, 4) Conditional execution based on build status, 5) Parallel execution with failure tolerance, 6) Custom error reporting and notifications. Implement proper logging, graceful degradation, and recovery procedures. Use retry with exponential backoff for network-related failures.
DevOps Use: Improved pipeline reliability, handling transient issues, maintaining deployment success rates, and reducing manual intervention requirements.
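A sketch combining retry, a timeout, and post-build handling (commands and limits are placeholders; cleanWs assumes the Workspace Cleanup plugin):

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // Re-run the step up to 3 times on transient failures
                retry(3) {
                    sh './deploy.sh'
                }
                // Abort instead of hanging indefinitely
                timeout(time: 10, unit: 'MINUTES') {
                    sh './smoke-test.sh'
                }
            }
        }
    }
    post {
        failure {
            echo "Deployment failed for ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
        always {
            cleanWs()  // tidy the workspace regardless of outcome
        }
    }
}
```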
Q207
Jenkins
Jenkins
How do you integrate Jenkins with Docker for containerized builds?
jenkinsdockercontainers
Answer
Answer
Explanation: Jenkins-Docker integration methods: 1) Docker Pipeline plugin for pipeline steps, 2) Docker agents for containerized build environments, 3) Docker-in-Docker for building images, 4) Docker outside of Docker (DooD) for security, 5) Kubernetes plugin for pod-based agents, 6) Docker Compose for multi-container builds. Use docker.image().inside() for containerized builds, implement proper volume mounting, and manage Docker daemon access securely.
DevOps Use: Consistent build environments, dependency isolation, scalable infrastructure, and modern containerized deployment workflows.
Q208
Jenkins
Jenkins
What are common pitfalls for juniors using Jenkins and how can they be avoided?
jenkinspitfallsjuniorbest-practices
Answer
Answer
Explanation: Common junior pitfalls: 1) Hardcoding credentials in pipelines, 2) Not using version control for Jenkinsfiles, 3) Overly complex pipeline logic, 4) Ignoring error handling, 5) Not implementing proper testing, 6) Poor resource management, 7) Inadequate logging and monitoring. Avoid by: following security best practices, using Pipeline as Code, keeping pipelines simple, implementing proper error handling, and learning from experienced team members.
DevOps Use: Improved team productivity, reduced security risks, better pipeline maintainability, and faster onboarding of new team members.
Q209
Kubernetes
Kubernetes
What is Kubernetes and why is it used in DevOps?
kubernetesfundamentalsorchestration
Answer
Answer
Explanation: Kubernetes is an open-source container orchestration platform that automates deployment, scaling, and management of containerized applications. It provides features like service discovery, load balancing, storage orchestration, automated rollouts/rollbacks, and self-healing capabilities. In DevOps, Kubernetes enables consistent application deployment across environments, supports microservices architecture, integrates with CI/CD pipelines, and provides infrastructure abstraction for developers.
DevOps Use: Kubernetes standardizes deployment processes, enables blue-green deployments, supports canary releases, and integrates with monitoring tools for observability in production environments.
Q210
Kubernetes
Kubernetes
What are the key components of the Kubernetes architecture (master/control plane, nodes, kubelet, etcd, kube-proxy)?
kubernetesarchitecturecomponents
Answer
Answer
Explanation: Kubernetes architecture consists of Control Plane (master) and Worker Nodes. Control Plane includes: API Server (REST API gateway), etcd (distributed key-value store for cluster state), Scheduler (assigns pods to nodes), Controller Manager (maintains desired state). Worker Nodes contain: kubelet (node agent communicating with API server), kube-proxy (network proxy for services), and Container Runtime (Docker/containerd). This distributed architecture ensures high availability and scalability.
DevOps Use: Understanding architecture helps with cluster troubleshooting, capacity planning, security hardening, and designing resilient multi-master setups for production environments.
Q211
Kubernetes
Kubernetes
What is the difference between a pod, node, and cluster?
kubernetesfundamentalsconcepts
Answer
Answer
Explanation: A Pod is the smallest deployable unit containing one or more containers sharing network and storage, typically running a single application. A Node is a physical or virtual machine running kubelet, kube-proxy, and container runtime, hosting multiple pods. A Cluster is a set of nodes managed by the control plane, providing the complete Kubernetes environment. Pods are ephemeral and scheduled on nodes; nodes provide compute resources; clusters provide the orchestration platform.
DevOps Use: Pods define application deployment units, nodes determine resource allocation and placement, clusters provide the foundation for multi-environment deployments and disaster recovery strategies.
Q212
Kubernetes
Kubernetes
What is a deployment in Kubernetes, and how does it differ from a pod?
kubernetesdeploymentspods
Answer
Answer
Explanation: A Deployment is a higher-level controller that manages ReplicaSets and Pods, providing declarative updates, rolling deployments, and rollback capabilities. While Pods are individual instances, Deployments ensure desired number of pod replicas, handle pod failures, and manage updates without downtime. Deployments use ReplicaSets to maintain pod count and provide features like rolling updates, pause/resume, and revision history that individual pods cannot provide.
DevOps Use: Deployments enable zero-downtime updates, automated scaling, rollback strategies, and integration with CI/CD pipelines for continuous deployment workflows.
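A minimal Deployment manifest showing the relationship (name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # the Deployment keeps 3 pod replicas running
  selector:
    matchLabels:
      app: web
  template:                  # pod template managed via a ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```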
Q213
Kubernetes
Kubernetes
What is a namespace and why is it important?
kubernetesnamespacesisolation
Answer
Answer
Explanation: Namespaces provide virtual clusters within a physical cluster, enabling resource isolation, access control, and multi-tenancy. They separate resources logically (pods, services, deployments) while sharing the same physical infrastructure. Default namespaces include: default (user resources), kube-system (system components), kube-public (publicly readable), and kube-node-lease (node heartbeats). Namespaces enable resource quotas, network policies, and RBAC boundaries.
DevOps Use: Namespaces separate environments (dev/staging/prod), teams, or applications, enabling resource quotas, security policies, and simplified management in shared clusters.
Q214
Kubernetes
Kubernetes
What is a pod in Kubernetes, and what are its main characteristics?
kubernetespodscontainers
Answer
Answer
Explanation: A Pod is the atomic unit of deployment containing one or more containers sharing the same network (IP address, port space) and storage volumes. Key characteristics: ephemeral (can be created/destroyed), scheduled as a unit, containers share localhost networking, shared storage volumes, and same lifecycle. Pods typically run single applications but can include helper containers (sidecars) for logging, monitoring, or data processing.
DevOps Use: Pods enable microservices deployment, sidecar patterns for logging/monitoring, and provide the foundation for service mesh architectures and observability strategies.
Q215
Kubernetes
Kubernetes
How do ReplicaSets ensure high availability?
kubernetesreplicasetsavailability
Answer
Answer
Explanation: ReplicaSets maintain a specified number of pod replicas by continuously monitoring pod health and creating/deleting pods to match the desired state. They use label selectors to identify managed pods and respond to node failures, pod crashes, or manual deletions by spawning replacement pods. ReplicaSets provide horizontal scaling, fault tolerance, and load distribution across nodes, ensuring application availability even during infrastructure failures.
DevOps Use: ReplicaSets enable automatic recovery from failures, horizontal scaling for load handling, and provide the foundation for rolling updates and blue-green deployments.
Q216
Kubernetes
Kubernetes
How does a deployment manage rolling updates and rollbacks?
kubernetesdeploymentsupdates
Answer
Answer
Explanation: Deployments manage rolling updates by creating a new ReplicaSet with updated pod template while gradually scaling down the old ReplicaSet. Update strategies include RollingUpdate (default, zero-downtime) and Recreate (terminates all pods before creating new ones). Rolling updates use maxUnavailable and maxSurge parameters to control update pace. Rollbacks use revision history to revert to previous ReplicaSet versions, enabling quick recovery from failed deployments.
DevOps Use: Rolling updates enable continuous deployment with zero downtime, while rollbacks provide quick recovery mechanisms for failed releases in production environments.
Q217
Kubernetes
Kubernetes
What is a DaemonSet, and when would you use it?
kubernetesdaemonsetnode-services
Answer
Answer
Explanation: DaemonSet ensures that a copy of a pod runs on all (or selected) nodes in the cluster. Unlike Deployments that focus on replica count, DaemonSets focus on node coverage. Common use cases include node monitoring agents (Prometheus Node Exporter), log collection (Fluentd), network plugins (Calico), and storage daemons. DaemonSets automatically schedule pods on new nodes and remove them when nodes are deleted.
DevOps Use: DaemonSets deploy infrastructure services like monitoring agents, log collectors, security scanners, and network components that need to run on every node.
Q218
Kubernetes
Kubernetes
What is a StatefulSet, and how does it differ from a Deployment?
kubernetesstatefulsetstateful-apps
Answer
Answer
Explanation: StatefulSet manages stateful applications requiring stable network identities, persistent storage, and ordered deployment/scaling. Unlike Deployments with interchangeable pods, StatefulSets provide: stable pod names (web-0, web-1), persistent storage per pod, ordered startup/shutdown, and stable network identities. Used for databases, message queues, and applications requiring data persistence or cluster coordination.
DevOps Use: StatefulSets deploy databases (MySQL, PostgreSQL), message brokers (Kafka, RabbitMQ), and distributed systems requiring stable identities and persistent data.
Q219
Kubernetes
Kubernetes
What is a Kubernetes service, and what types are there (ClusterIP, NodePort, LoadBalancer, ExternalName)?
kubernetesservicesnetworking
Answer
Answer
Explanation: Services provide stable network endpoints for accessing pods, abstracting away pod IP changes. Types: ClusterIP (default, internal cluster access only), NodePort (exposes service on each node's IP at a static port), LoadBalancer (cloud provider creates external load balancer), ExternalName (maps service to external DNS name). Services use label selectors to route traffic to matching pods and provide load balancing across healthy endpoints.
DevOps Use: Services enable service discovery, load balancing, and external access patterns for microservices architectures and integration with external load balancers.
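A minimal ClusterIP Service sketch (labels and ports are illustrative; swap the type for NodePort or LoadBalancer to expose it externally):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP      # default; internal cluster access only
  selector:
    app: web           # routes traffic to pods carrying this label
  ports:
  - port: 80           # port the service listens on
    targetPort: 80     # container port receiving the traffic
```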
Q220
Kubernetes
Kubernetes
How does Kubernetes handle service discovery?
kubernetesservice-discoverydns
Answer
Answer
Explanation: Kubernetes provides automatic service discovery through DNS and environment variables. The cluster DNS (CoreDNS) creates DNS records for services, allowing pods to resolve service names to IP addresses. Services get DNS names like service-name.namespace.svc.cluster.local. Environment variables are injected into pods for services existing at pod creation time. Service discovery enables loose coupling between microservices and dynamic service location.
DevOps Use: Service discovery enables microservices communication, service mesh integration, and dynamic scaling without hardcoded IP addresses in application configurations.
Q221
Kubernetes
Kubernetes
What is a headless service, and when would you use it?
kubernetesheadless-servicestateful
Answer
Answer
Explanation: Headless services (clusterIP: None) don't provide load balancing or a single service IP. Instead, DNS queries return all pod IPs directly, allowing clients to choose which pod to connect to. Used for stateful applications needing direct pod access, service discovery without load balancing, or custom load balancing logic. Common with StatefulSets where applications need to connect to specific pod instances.
DevOps Use: Headless services support database clusters, peer-to-peer applications, and custom service discovery patterns where direct pod communication is required.
Q222
Kubernetes
Kubernetes
How does Kubernetes implement ingress, and what is an Ingress Controller?
kubernetesingressrouting
Answer
Answer
Explanation: Ingress provides HTTP/HTTPS routing to services based on hostnames and paths, offering features like SSL termination, virtual hosting, and path-based routing. Ingress resources define routing rules, while Ingress Controllers (nginx, traefik, istio) implement these rules by configuring load balancers. Ingress enables external access to cluster services with advanced routing capabilities beyond basic LoadBalancer services.
DevOps Use: Ingress enables domain-based routing, SSL certificate management, and cost-effective external access compared to multiple LoadBalancer services.
Q223
Kubernetes
Kubernetes
How does network policy work in Kubernetes?
kubernetesnetwork-policysecurity
Answer
Answer
Explanation: NetworkPolicies provide firewall rules for pod-to-pod communication using label selectors to define traffic rules. They specify ingress (incoming) and egress (outgoing) rules with source/destination pods, namespaces, or IP blocks. By default, all pod communication is allowed; NetworkPolicies enable zero-trust networking by denying traffic except explicitly allowed connections. Requires CNI plugins supporting NetworkPolicy (Calico, Cilium, Weave).
DevOps Use: NetworkPolicies implement micro-segmentation, comply with security requirements, and isolate sensitive workloads in multi-tenant environments.
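A sketch of a policy that admits only frontend pods to the backend (labels and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend        # pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # only these pods may connect
    ports:
    - protocol: TCP
      port: 8080
```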
Q224
Kubernetes
Kubernetes
What is a ConfigMap and how is it different from a Secret?
kubernetesconfigmapsecrets
Answer
Answer
Explanation: ConfigMaps store non-sensitive configuration data as key-value pairs, while Secrets store sensitive data (passwords, tokens, keys) with base64 encoding and additional security features. Both can be mounted as volumes or environment variables. Secrets have stricter RBAC controls, are stored in etcd with encryption at rest (if configured), and have size limits. ConfigMaps are for application configuration; Secrets are for credentials and sensitive data.
DevOps Use: ConfigMaps manage application settings across environments; Secrets handle database passwords, API keys, and certificates in secure deployment pipelines.
Q225
Kubernetes
Kubernetes
How do you mount persistent storage in Kubernetes (PersistentVolume, PersistentVolumeClaim)?
kubernetesstoragepersistence
Answer
Answer
Explanation: PersistentVolumes (PV) represent storage resources in the cluster, while PersistentVolumeClaims (PVC) are requests for storage by pods. The binding process matches PVC requirements (size, access modes, storage class) with available PVs. Pods reference PVCs in volume specifications. This abstraction separates storage provisioning from consumption, enabling dynamic provisioning and storage lifecycle management independent of pod lifecycle.
DevOps Use: Persistent storage enables stateful applications, database deployments, and data persistence across pod restarts in production environments.
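A sketch of a claim and a pod that mounts it (the standard StorageClass name and mount path are assumptions):
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: data-pvc
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: standard         # assumed class; triggers dynamic provisioning
    resources:
      requests:
        storage: 10Gi
In the pod spec, the claim is referenced by name:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
  containers:
    - name: db
      image: postgres:16
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data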
Q226
Kubernetes
Kubernetes
What are storage classes and dynamic provisioning?
kubernetesstorage-classprovisioning
Answer
Answer
Explanation: StorageClasses define different types of storage (SSD, HDD, network storage) with specific parameters and provisioners. Dynamic provisioning automatically creates PersistentVolumes when PersistentVolumeClaims reference a StorageClass, eliminating manual PV creation. StorageClasses specify provisioner (AWS EBS, GCE PD, Azure Disk), parameters (disk type, replication), and reclaim policies. This enables on-demand storage allocation and standardized storage tiers.
DevOps Use: StorageClasses enable automated storage provisioning, standardize storage tiers across environments, and integrate with cloud provider storage services.
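For example, a StorageClass for the AWS EBS CSI driver (one possible setup, not the only one):
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: fast-ssd
  provisioner: ebs.csi.aws.com     # AWS EBS CSI driver
  parameters:
    type: gp3                      # disk type passed to the provisioner
  reclaimPolicy: Delete            # delete the volume when the PVC is deleted
  allowVolumeExpansion: true       # also required for PVC resizing (next question)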
Q227
Kubernetes
Kubernetes
How do you perform volume expansion or resizing in Kubernetes?
kubernetesvolume-expansionstorage
Answer
Answer
Explanation: Volume expansion allows increasing PVC size without data loss. Requirements: StorageClass must support expansion (allowVolumeExpansion: true), underlying storage system must support online resizing, and file system expansion may require pod restart. Process: edit PVC spec to increase size, Kubernetes triggers storage expansion, file system expansion occurs automatically or requires pod restart depending on the storage driver.
DevOps Use: Volume expansion enables scaling storage for growing databases, log storage, and applications without downtime or data migration.
Q228
Kubernetes
Kubernetes
How do you inject environment variables and configuration into containers?
kubernetesenvironmentconfiguration
Answer
Answer
Explanation: Environment variables can be injected through: direct specification in pod spec, ConfigMap references (configMapRef, configMapKeyRef), Secret references (secretRef, secretKeyRef), field references (metadata.name, status.podIP), and resource field references (limits, requests). ConfigMaps and Secrets can be mounted as volumes for file-based configuration. This enables externalized configuration following twelve-factor app principles.
DevOps Use: Environment injection enables configuration management across environments, secrets management, and application portability without code changes.
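A container-spec fragment showing the main injection forms (the ConfigMap/Secret names match the earlier sketch and are illustrative):
  env:
    - name: APP_MODE
      value: "production"                # direct value
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: db_host
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: db_password
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP        # downward API field reference
  envFrom:
    - configMapRef:
        name: app-config                 # import every key as a variable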
Q229
Kubernetes
Kubernetes
How do you scale pods manually and automatically (Horizontal Pod Autoscaler)?
kubernetesscalinghpa
Answer
Answer
Explanation: Manual scaling uses kubectl scale deployment/<name> --replicas=N or editing the deployment spec. Horizontal Pod Autoscaler (HPA) automatically scales based on CPU utilization, memory usage, or custom metrics. HPA requires metrics-server for resource metrics and custom metrics APIs for advanced scaling. Vertical Pod Autoscaler (VPA) adjusts resource requests/limits instead of replica counts. HPA monitors metrics, calculates the desired replica count, and updates the deployment's scale.
DevOps Use: Auto-scaling handles traffic spikes, optimizes resource utilization, and reduces costs by scaling down during low usage periods.
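A sketch of both approaches (deployment name and thresholds are illustrative; metrics-server must be running):
  # Manual: kubectl scale deployment/web --replicas=5
  # Imperative HPA: kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70
  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: web-hpa
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: web
    minReplicas: 2
    maxReplicas: 10
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70    # scale out when average CPU exceeds 70%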
Q230
Kubernetes
Kubernetes
How do you perform rolling updates safely?
kubernetesrolling-updatessafety
Answer
Answer
Explanation: Safe rolling updates require: readiness/liveness probes to verify pod health, appropriate maxUnavailable/maxSurge settings to control update pace, resource limits to prevent resource exhaustion, and rollback strategy for failures. Use deployment strategies like blue-green or canary for critical applications. Monitor application metrics during updates and implement automated rollback triggers for failed deployments.
DevOps Use: Safe rolling updates enable continuous deployment with minimal risk, automated rollback capabilities, and integration with monitoring systems for deployment validation.
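A conservative rollout sketch (image tag, probe path, and port are placeholders):
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web
  spec:
    replicas: 4
    selector:
      matchLabels: { app: web }
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 0        # never drop below desired capacity
        maxSurge: 1              # add at most one extra pod at a time
    template:
      metadata:
        labels: { app: web }
      spec:
        containers:
          - name: web
            image: example/web:1.2.3          # pinned version, not :latest
            readinessProbe:
              httpGet: { path: /healthz, port: 8080 }
If the new version misbehaves, kubectl rollout undo deployment/web reverts to the previous revision.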
Q231
Kubernetes
Kubernetes
What are labels and selectors, and how are they used in Kubernetes?
kuberneteslabelsselectors
Answer
Answer
Explanation: Labels are key-value pairs attached to objects for identification and organization. Selectors query objects based on labels using equality (app=web) or set-based (app in (web,api)) operators. Labels enable grouping, filtering, and relationships between resources. Services use selectors to find pods, deployments use labels for pod templates, and operators use labels for resource management. Labels are fundamental to Kubernetes' declarative model.
DevOps Use: Labels enable resource organization, monitoring queries, deployment strategies, and automated operations based on resource characteristics.
Q232
Kubernetes
Kubernetes
How do taints and tolerations work in scheduling pods?
kubernetestaintstolerations
Answer
Answer
Explanation: Taints are applied to nodes to repel pods unless they have matching tolerations. Taints have key, value, and effect (NoSchedule, PreferNoSchedule, NoExecute). Tolerations in pod specs allow scheduling on tainted nodes. Common uses: dedicated nodes for specific workloads, node maintenance, and hardware-specific scheduling. NoExecute effect evicts existing pods without tolerations, while NoSchedule only affects new scheduling.
DevOps Use: Taints and tolerations enable node specialization, maintenance workflows, and workload isolation for performance or security requirements.
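A minimal sketch, assuming a hypothetical gpu=true taint:
  # Taint a node so that only tolerating pods can be scheduled there:
  kubectl taint nodes node1 gpu=true:NoSchedule

  # Matching toleration in the pod spec:
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"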
Q233
Kubernetes
Kubernetes
What are affinity and anti-affinity rules?
kubernetesaffinityscheduling
Answer
Answer
Explanation: Affinity rules influence pod scheduling based on labels of nodes (nodeAffinity) or other pods (podAffinity/podAntiAffinity). NodeAffinity replaces nodeSelector with more expressive rules (required/preferred). PodAffinity schedules pods near related pods; podAntiAffinity spreads pods apart. Rules can be required (hard) or preferred (soft) with weight-based preferences. Topology keys define the scope of affinity rules (zone, node, region).
DevOps Use: Affinity rules enable high availability through pod spreading, performance optimization by co-locating related services, and compliance with data locality requirements.
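For example, a hard anti-affinity rule spreading web replicas one per node (label and topology key are illustrative):
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname   # scope: no two app=web pods on the same node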
Q234
Kubernetes
Kubernetes
How do you secure Kubernetes clusters (RBAC, NetworkPolicy, Secrets management)?
kubernetessecurityrbac
Answer
Answer
Explanation: Kubernetes security involves multiple layers: RBAC (Role-Based Access Control) for API access, NetworkPolicies for network segmentation, Pod Security Standards for pod security contexts, Secrets management with encryption at rest, admission controllers for policy enforcement, and regular security updates. Implement least privilege access, enable audit logging, use service meshes for mTLS, and scan images for vulnerabilities.
DevOps Use: Security measures enable compliance, protect sensitive workloads, and provide audit trails for production environments and regulatory requirements.
Q235
Kubernetes
Kubernetes
What are ServiceAccounts and how are they used?
kubernetesserviceaccountsauthentication
Answer
Answer
Explanation: ServiceAccounts provide identity for pods to authenticate with the Kubernetes API server. Each namespace has a default ServiceAccount automatically assigned to pods. Custom ServiceAccounts enable fine-grained RBAC permissions for different applications. ServiceAccounts are bound to Roles/ClusterRoles through RoleBindings/ClusterRoleBindings. Pods receive ServiceAccount tokens as mounted secrets for API authentication.
DevOps Use: ServiceAccounts enable secure API access for applications, CI/CD pipelines, monitoring systems, and operators requiring cluster interactions.
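A sketch granting a pod read-only access to pods in its namespace (all names are illustrative):
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: app-reader
    namespace: default
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: pod-reader
    namespace: default
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: app-reader-binding
    namespace: default
  subjects:
    - kind: ServiceAccount
      name: app-reader
      namespace: default
  roleRef:
    kind: Role
    name: pod-reader
    apiGroup: rbac.authorization.k8s.io
The pod then sets serviceAccountName: app-reader in its spec.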
Q236
Kubernetes
Kubernetes
How do you monitor pods and clusters (metrics, logging, alerts)?
kubernetesmonitoringobservability
Answer
Answer
Explanation: Kubernetes monitoring involves multiple components: metrics-server for resource metrics, Prometheus for custom metrics and alerting, Grafana for visualization, and logging solutions like ELK stack or Fluentd. Monitor cluster health (node status, resource usage), application metrics (response time, error rates), and infrastructure metrics (CPU, memory, network). Implement alerting for critical conditions and use distributed tracing for complex applications.
DevOps Use: Monitoring enables proactive issue detection, capacity planning, performance optimization, and SLA compliance in production environments.
Q237
Kubernetes
Kubernetes
How do you debug a failing pod (kubectl describe, logs, events)?
kubernetesdebuggingtroubleshooting
Answer
Answer
Explanation: Pod debugging follows a systematic approach: kubectl get pods shows status, kubectl describe pod reveals events and conditions, kubectl logs shows container output, kubectl exec enables interactive debugging. Common issues include image pull errors, resource constraints, configuration problems, and networking issues. Use kubectl get events for cluster-wide events and kubectl top for resource usage. Debug init containers and multi-container pods separately.
DevOps Use: Effective debugging reduces mean time to resolution, improves application reliability, and enables faster troubleshooting in production incidents.
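A typical first-pass sequence (namespace and pod name are placeholders):
  kubectl get pods -n prod                            # status: Pending, CrashLoopBackOff, ImagePullBackOff...
  kubectl describe pod web-6f7d9 -n prod              # events: scheduling failures, pull errors, probe failures
  kubectl logs web-6f7d9 -n prod --previous           # output of the last crashed container
  kubectl logs web-6f7d9 -c init-db -n prod           # a specific container (e.g. an init container)
  kubectl exec -it web-6f7d9 -n prod -- sh            # interactive shell for live debugging
  kubectl get events -n prod --sort-by=.lastTimestamp # cluster events in time order
  kubectl top pod web-6f7d9 -n prod                   # resource usage (requires metrics-server)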
Q238
Kubernetes
Kubernetes
What are common best practices for Kubernetes configuration and deployment for DevOps juniors?
kubernetesbest-practicesconfiguration
Answer
Answer
Explanation: Key best practices include: use namespaces for organization, implement resource limits and requests, configure health checks (readiness/liveness probes), use ConfigMaps/Secrets for configuration, implement proper RBAC, use labels consistently, avoid the :latest image tag in favor of pinned versions, implement monitoring and logging, use Helm for package management, and follow GitOps practices. Start with small, focused deployments and gradually add complexity.
DevOps Use: Best practices ensure reliable deployments, easier troubleshooting, better resource utilization, and maintainable configurations in production environments.
Q239
Linux
Linux
What is Linux and why is it widely used in DevOps?
linuxbasicsdevops
Answer
Answer
Explanation: Linux is an open-source, Unix-like operating system kernel that forms the foundation of many distributions. It's widely used in DevOps because of its stability, security, flexibility, and extensive command-line tools. Linux provides excellent container support, automation capabilities, and integrates seamlessly with DevOps tools like Docker, Kubernetes, Jenkins, and cloud platforms. Its open-source nature allows customization and cost-effective scaling.
DevOps Use: Linux servers host applications, run CI/CD pipelines, manage containers, and provide the foundation for cloud infrastructure and automation scripts.
Q240
Linux
Linux
What are the major Linux distributions and differences (Ubuntu, CentOS, RHEL, Debian)?
linuxdistributionscomparison
Answer
Answer
Explanation: Major distributions include: Ubuntu (Debian-based, user-friendly, regular releases, apt package manager), CentOS (RHEL clone, enterprise-focused, yum/dnf, long support cycles), RHEL (Red Hat Enterprise Linux, commercial support, enterprise features), Debian (stable, conservative updates, apt, community-driven). Each targets different use cases: Ubuntu for development/cloud, CentOS/RHEL for enterprise servers, Debian for stability-critical systems.
DevOps Use: Choose distributions based on requirements: Ubuntu for cloud/containers, CentOS/RHEL for enterprise infrastructure, Debian for stable production systems.
Q241
Linux
Linux
What is the Linux file system hierarchy (/, /etc, /var, /home, /usr, /bin)?
linuxfilesystemhierarchy
Answer
Answer
Explanation: Linux follows Filesystem Hierarchy Standard (FHS): / (root directory, top level), /etc (system configuration files), /var (variable data like logs, databases), /home (user home directories), /usr (user programs and data), /bin (essential user binaries), /sbin (system binaries), /lib (shared libraries), /tmp (temporary files), /opt (optional software packages). This standardized structure ensures consistency across distributions.
DevOps Use: Understanding hierarchy helps locate configuration files, logs, applications, and user data for system administration and automation scripts.
Q243
Linux
Linux
How do you create, copy, move, and delete files and directories? (touch, cp, mv, rm, mkdir)
linuxfile-operationscommands
Answer
Answer
Explanation: File operations: touch creates empty files or updates timestamps, cp copies files/directories (cp -r for recursive), mv moves/renames files and directories, rm removes files (rm -r for directories, rm -rf for force), mkdir creates directories (mkdir -p for parent directories). These commands form the foundation of file management in Linux systems.
DevOps Use: Essential for deployment scripts, backup operations, log management, and infrastructure automation tasks.
Q244
Linux
Linux
What are Linux file permissions and what do rwx mean?
linuxpermissionssecurity
Answer
Answer
Explanation: Linux permissions use rwx notation: r (read, value 4), w (write, value 2), x (execute, value 1). Permissions apply to three categories: owner, group, others. Example: rwxr-xr-- means owner has full access (7), group can read/execute (5), others can only read (4). Directories need execute permission for access. Permissions ensure security and access control in multi-user environments.
DevOps Use: Proper permissions secure configuration files, scripts, and sensitive data while allowing necessary access for applications and users.
Q245
Linux
Linux
How do you change permissions and ownership? (chmod, chown, chgrp)
linuxpermissionsownership
Answer
Answer
Explanation: chmod changes permissions using numeric (chmod 755 file) or symbolic (chmod u+x file) notation. chown changes ownership (chown user:group file), chgrp changes group ownership. Common patterns: chmod +x (make executable), chmod 644 (rw-r--r--), chmod 755 (rwxr-xr-x). Use -R flag for recursive changes on directories.
DevOps Use: Essential for securing applications, setting script permissions, managing web server files, and ensuring proper access controls in deployment pipelines.
Q246
Linux
Linux
What is the difference between a user, group, and root in Linux?
linuxusersgroups
Answer
Answer
Explanation: Users are individual accounts with unique IDs (UID), groups are collections of users sharing permissions (GID), and root is the superuser (UID 0) with unlimited system access. Users belong to primary and secondary groups. Root can perform any operation, while regular users have restricted access. Groups simplify permission management by allowing access control for multiple users simultaneously.
DevOps Use: User/group management enables secure multi-user environments, application isolation, and proper access controls for different team members and services.
Q247
Linux
Linux
How do you switch users and run commands as another user (su, sudo)?
linuxusersprivileges
Answer
Answer
Explanation: su (switch user) changes to another user account, requiring their password. sudo (superuser do) executes commands as another user (usually root) using your password, if authorized in /etc/sudoers. su - provides full login environment, sudo preserves current environment. sudo provides better security through logging, time limits, and granular permissions compared to sharing root password.
DevOps Use: sudo enables secure administrative access, audit logging, and controlled privilege escalation in production environments without sharing root credentials.
Q248
Linux
Linux
What are setuid, setgid, and sticky bit?
linuxpermissionssecurity
Answer
Answer
Explanation: Special permissions: setuid (4000) makes executable run with owner's privileges, setgid (2000) runs with group privileges or makes new files inherit directory group, sticky bit (1000) restricts deletion to file owners only (common on /tmp). These provide advanced access control beyond standard rwx permissions. Example: passwd command uses setuid to modify /etc/passwd as root.
DevOps Use: Special permissions enable secure privilege escalation for specific applications and protect shared directories in multi-user environments.
Q249
Linux
Linux
How do you view running processes? (ps, top, htop)
linuxprocessesmonitoring
Answer
Answer
Explanation: ps shows process snapshots: ps aux (all processes), ps -ef (full format), ps -u user (user processes). top provides real-time process monitoring with CPU/memory usage, sorting options. htop offers enhanced interface with colors, tree view, and easier navigation. These tools help monitor system performance, identify resource-heavy processes, and troubleshoot issues.
DevOps Use: Process monitoring enables performance troubleshooting, resource optimization, identifying runaway processes, and capacity planning for production systems.
Q250
Linux
Linux
How do you kill or stop a process? (kill, killall, pkill)
linuxprocessescontrol
Answer
Answer
Explanation: Process termination commands: kill sends signals to processes by PID (kill -9 PID for force kill), killall terminates by process name, pkill kills by pattern matching. Common signals: TERM (15, graceful shutdown), KILL (9, force terminate), HUP (1, reload configuration). Always try graceful termination before force killing to prevent data corruption.
DevOps Use: Process management enables service restarts, handling hung applications, and emergency system recovery in production environments.
Q251
Linux
Linux
How do you run commands in the background and bring them to the foreground? (&, fg, bg)
linuxjobsmultitasking
Answer
Answer
Explanation: Background execution: append & to run commands in background, Ctrl+Z suspends current job, bg resumes suspended job in background, fg brings background job to foreground, jobs lists active jobs. nohup runs commands immune to hangups. This enables multitasking and long-running processes without blocking terminal sessions.
DevOps Use: Background jobs enable running long deployments, monitoring scripts, and maintenance tasks without blocking interactive terminal sessions.
Q252
Linux
Linux
How do you schedule jobs using cron and at?
linuxschedulingautomation
Answer
Answer
Explanation: cron schedules recurring jobs using crontab format: minute hour day month weekday command. Examples: 0 2 * * * (daily at 2 AM), */15 * * * * (every 15 minutes). at schedules one-time jobs: at 2pm tomorrow. crontab -e edits user crontab, crontab -l lists jobs. System-wide cron jobs go in /etc/crontab or /etc/cron.d/.
DevOps Use: Automated scheduling enables backups, log rotation, system maintenance, monitoring checks, and deployment tasks without manual intervention.
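A sketch of a user crontab and an at job (script paths are hypothetical):
  # crontab -e, then:
  # m    h  dom mon dow  command
  0      2  *   *   *    /opt/scripts/backup.sh >> /var/log/backup.log 2>&1   # daily at 2 AM
  */15   *  *   *   *    /opt/scripts/healthcheck.sh                          # every 15 minutes

  # One-time job:
  echo "/opt/scripts/cleanup.sh" | at 2pm tomorrow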
Q253
Linux
Linux
What is a daemon process?
linuxdaemonsservices
Answer
Answer
Explanation: Daemon processes run in the background without terminal attachment, typically starting at boot and providing system services. They usually end with 'd' (httpd, sshd, mysqld). Daemons detach from controlling terminal, run as system users, and handle requests or perform maintenance tasks. Modern systems use systemd to manage daemons as services.
DevOps Use: Daemons provide essential services like web servers, databases, monitoring agents, and system services required for application infrastructure.
Q254
Linux
Linux
How do you check IP addresses and network interfaces? (ifconfig, ip addr)
linuxnetworkinginterfaces
Answer
Answer
Explanation: Network interface commands: ifconfig shows/configures network interfaces (deprecated but still common), ip addr show displays interface information (modern replacement), ip link shows physical interfaces. These commands reveal IP addresses, MAC addresses, interface status, and network configuration. ip command is more powerful and preferred in modern distributions.
DevOps Use: Network troubleshooting, server configuration verification, and network automation scripts require interface information for proper connectivity.
Q255
Linux
Linux
How do you check network connectivity? (ping, curl, wget, telnet, netstat)
linuxnetworkingconnectivity
Answer
Answer
Explanation: Connectivity tools: ping tests ICMP reachability and latency, curl/wget test HTTP connectivity and download files, telnet tests TCP port connectivity, netstat shows network connections and listening ports. Each serves different purposes: ping for basic connectivity, curl/wget for web services, telnet for port testing, netstat for connection analysis.
DevOps Use: Network troubleshooting, service health checks, API testing, and connectivity verification in deployment and monitoring scripts.
Q256
Linux
Linux
How do you open or monitor ports and services? (ss, lsof, systemctl)
linuxportsservices
Answer
Answer
Explanation: Port monitoring: ss shows socket statistics (modern netstat replacement), lsof lists open files including network connections, systemctl manages systemd services. Examples: ss -tuln (listening TCP/UDP ports), lsof -i :80 (processes using port 80), systemctl status service (service status). These tools help identify port usage, service status, and network connections.
DevOps Use: Service monitoring, port conflict resolution, security auditing, and troubleshooting network connectivity issues in production environments.
Q257
Linux
Linux
How do you start, stop, enable, and disable services? (systemctl start/stop/enable/disable)
linuxservicessystemctl
Answer
Answer
Explanation: systemctl manages systemd services: start (begin service), stop (end service), enable (auto-start at boot), disable (prevent auto-start), restart (stop and start), reload (reload configuration), status (show service state). Examples: systemctl start nginx, systemctl enable docker. Enable/disable affects boot behavior, start/stop affects current state.
DevOps Use: Service management enables automated deployments, system maintenance, and ensuring critical services start automatically after reboots.
Q258
Linux
Linux
How do you troubleshoot DNS and routing issues? (nslookup, dig, traceroute)
linuxdnsrouting
Answer
Answer
Explanation: DNS/routing tools: nslookup queries DNS servers for domain resolution, dig provides detailed DNS information with better output, traceroute shows network path to destination with hop-by-hop latency. These tools help diagnose connectivity issues, DNS resolution problems, and network routing paths. dig is preferred over nslookup for scripting and detailed analysis.
DevOps Use: Troubleshooting application connectivity, verifying DNS configurations, and diagnosing network performance issues in distributed systems.
Q259
Linux
Linux
How do you install, update, and remove packages? (apt, yum, dnf, rpm)
linuxpackagesinstallation
Answer
Answer
Explanation: Package managers: apt (Debian/Ubuntu): apt install/update/remove, yum/dnf (RHEL/CentOS): yum install/update/remove, rpm (low-level): rpm -i/-U/-e. Package managers handle dependencies, updates, and system integration. Always update package lists (apt update) before installing. Use package manager appropriate for your distribution.
DevOps Use: Automated software installation, system updates, and dependency management in deployment scripts and infrastructure automation.
Q260
Linux
Linux
How do you check installed packages and package versions?
linuxpackagesverification
Answer
Answer
Explanation: Package information commands: apt list --installed (Debian/Ubuntu), yum list installed (RHEL/CentOS), dpkg -l (Debian packages), rpm -qa (RPM packages). For specific packages: apt show package, yum info package, dpkg -s package. These commands help verify installations, check versions, and audit system software.
DevOps Use: System auditing, security compliance, version verification, and dependency tracking for infrastructure management and security assessments.
Q261
Linux
Linux
How do you add or remove repositories?
linuxrepositoriessources
Answer
Answer
Explanation: Repository management: apt uses /etc/apt/sources.list and /etc/apt/sources.list.d/, add with add-apt-repository or manual editing. yum/dnf uses /etc/yum.repos.d/ directory with .repo files. Examples: add-apt-repository ppa:user/repo, yum-config-manager --add-repo url. Always verify repository authenticity and import GPG keys for security.
DevOps Use: Adding software repositories enables installation of third-party applications, development tools, and specialized software not in default repositories.
Q262
Linux
Linux
How do you search for files and content? (find, grep, locate)
linuxsearchfiles
Answer
Answer
Explanation: Search tools: find searches filesystem by name, size, type, permissions (find /path -name '*.txt'), grep searches file content for patterns (grep 'pattern' file), locate uses database for fast filename searches (updatedb to refresh). Combine tools: find /path -name '*.log' -exec grep 'error' {} \; for powerful searches.
DevOps Use: Log analysis, configuration file location, troubleshooting, and automation scripts requiring file discovery and content analysis.
Q263
Linux
Linux
How do you use pipes and redirection (|, >, >>, <)?
linuxpipesredirection
Answer
Answer
Explanation: I/O redirection: | (pipe) sends output to another command, > redirects output to file (overwrites), >> appends to file, < reads input from file, 2> redirects errors, &> redirects both output and errors. Examples: ls | grep txt, echo 'text' > file.txt, command 2>&1 | tee log.txt. These enable powerful command chaining and output management.
DevOps Use: Log processing, data transformation, automation scripts, and creating complex command pipelines for system administration tasks.
Q264
Linux
Linux
How do you write a basic shell script and make it executable?
linuxscriptingautomation
Answer
Answer
Explanation: Shell script basics: start with shebang (#!/bin/bash), write commands in sequence, use variables (VAR=value), conditionals (if/then/else), loops (for/while). Make executable with chmod +x script.sh, run with ./script.sh. Include error handling, comments, and proper variable quoting for robust scripts.
DevOps Use: Automation scripts for deployments, system maintenance, monitoring, and repetitive tasks in CI/CD pipelines and infrastructure management.
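A minimal sketch of a robust script (service name is an assumption):
  #!/bin/bash
  # restart-app.sh - restart a service and verify it came back
  set -euo pipefail              # abort on errors, unset variables, and pipe failures

  SERVICE="myapp"                # hypothetical systemd unit

  echo "Restarting $SERVICE..."
  sudo systemctl restart "$SERVICE"

  if systemctl is-active --quiet "$SERVICE"; then
    echo "$SERVICE is running"
  else
    echo "$SERVICE failed to start" >&2
    exit 1
  fi
Make it executable and run it: chmod +x restart-app.sh && ./restart-app.sh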
Q265
Linux
Linux
How do you schedule automated scripts and jobs in Linux?
linuxschedulingautomation
Answer
Answer
Explanation: Script scheduling methods: cron for recurring tasks (crontab -e), at for one-time tasks, systemd timers for advanced scheduling, anacron for systems not always running. Include logging, error handling, and notifications in scheduled scripts. Use absolute paths and proper environment variables in automated scripts.
DevOps Use: Automated backups, log rotation, system monitoring, deployment tasks, and maintenance operations without manual intervention.
Q266
Linux
Linux
How do you set and export environment variables?
linuxenvironmentvariables
Answer
Answer
Explanation: Environment variables: set with VAR=value (local to shell), export VAR=value (available to child processes), unset VAR (remove variable). Persistent variables go in ~/.bashrc, ~/.profile, or /etc/environment. Common variables: PATH, HOME, USER, SHELL. Use $VAR or ${VAR} to reference variables.
DevOps Use: Configuration management, application settings, API keys, and environment-specific configurations in deployment and automation scripts.
Q267
Linux
Linux
How do you check disk usage and memory usage? (df, du, free, top)
linuxmonitoringresources
Answer
Answer
Explanation: System monitoring: df shows filesystem disk usage, du shows directory space usage (du -sh for summary), free shows memory usage (free -h for human-readable), top shows real-time system resources. Examples: df -h (disk space), du -sh /var/log (log directory size), free -m (memory in MB). These tools help monitor resource utilization.
DevOps Use: Capacity planning, performance monitoring, identifying resource bottlenecks, and preventing system failures due to resource exhaustion.
Q268
Linux
Linux
What are best practices for Linux administration as a junior DevOps engineer?
linuxbest-practicesadministration
Answer
Answer
Explanation: Key practices: regular backups and testing restores, monitor system logs (/var/log/), use sudo instead of root, keep systems updated, implement proper file permissions, use configuration management tools, document changes, monitor resource usage, implement security hardening, and maintain disaster recovery procedures. Follow principle of least privilege and automate repetitive tasks.
DevOps Use: Best practices ensure system reliability, security, maintainability, and compliance in production environments while reducing operational risks.
Q269
Monitoring
Monitoring
What is monitoring in DevOps, and why is it important?
monitoringfundamentalsdevops
Answer
Answer
Explanation: Monitoring in DevOps is the continuous observation and measurement of system performance, application health, and infrastructure metrics to ensure reliability, availability, and optimal performance. It involves collecting, analyzing, and alerting on data from applications, servers, networks, and services. Monitoring enables proactive issue detection, performance optimization, capacity planning, and maintaining service level objectives (SLOs).
DevOps Use: Monitoring enables early detection of issues, reduces mean time to resolution (MTTR), supports data-driven decisions, and ensures system reliability in production environments.
Q270
Monitoring
Monitoring
What are the differences between monitoring, observability, and logging?
monitoringobservabilitylogging
Answer
Answer
Explanation: Monitoring focuses on known metrics and predefined dashboards to track system health. Observability provides deep insights into system behavior through metrics, logs, and traces, enabling understanding of unknown issues. Logging captures discrete events and messages for debugging and audit trails. Monitoring answers 'what' is happening, observability answers 'why' it's happening, and logging provides the detailed context.
DevOps Use: Use monitoring for operational health, observability for complex troubleshooting and system understanding, and logging for detailed event analysis and compliance.
Q271
Monitoring
Monitoring
What is the difference between metrics, logs, and traces?
monitoringmetricslogstraces
Answer
Answer
Explanation: Metrics are numerical measurements over time (CPU usage, response time, error rates), typically stored in time-series databases. Logs are discrete events with timestamps containing detailed information about system activities. Traces track requests through distributed systems, showing the complete journey across multiple services. Together, they form the three pillars of observability, providing complementary views of system behavior.
DevOps Use: Metrics for alerting and dashboards, logs for debugging and audit trails, traces for understanding request flows in microservices architectures.
Q272
Monitoring
Monitoring
What is a time-series database, and why is it important in monitoring?
monitoringtime-seriesdatabase
Answer
Answer
Explanation: Time-series databases are optimized for storing and querying data points indexed by time, such as metrics collected at regular intervals. They provide efficient compression, fast aggregation queries, and automatic data retention policies. Examples include InfluxDB, Prometheus, and TimescaleDB. They handle high write volumes, support downsampling, and enable complex time-based analytics essential for monitoring systems.
DevOps Use: Time-series databases store metrics efficiently, enable fast dashboard queries, support alerting rules, and provide historical analysis for capacity planning and trend analysis.
Q273
Monitoring
Monitoring
What is the difference between push-based and pull-based monitoring?
monitoringpushpullarchitecture
Answer
Answer
Explanation: Push-based monitoring has applications actively send metrics to monitoring systems (like StatsD, DataDog). Pull-based monitoring has monitoring systems scrape metrics from application endpoints (like Prometheus). Push is better for ephemeral workloads and firewalled environments; pull is better for service discovery and centralized configuration. Each has trade-offs in network topology, security, and operational complexity.
DevOps Use: Choose push for dynamic environments and batch jobs, pull for long-running services and when you need centralized metric collection control.
Q274
Monitoring
Monitoring
What are key system metrics to monitor (CPU, memory, disk, network)?
monitoringsystem-metricsperformance
Answer
Answer
Explanation: Key system metrics include: CPU utilization and load average, memory usage and swap, disk I/O and space utilization, network throughput and errors. Monitor both current values and trends. CPU load average shows system stress, memory includes buffers/cache, disk metrics include IOPS and latency, network includes bandwidth and packet loss. Set thresholds based on baseline performance and capacity limits.
DevOps Use: System metrics enable capacity planning, performance optimization, and early detection of resource exhaustion that could impact application performance.
Q275
Monitoring
Monitoring
What are application-level metrics, and why are they important?
monitoringapplication-metricsperformance
Answer
Answer
Explanation: Application-level metrics measure business and technical performance: response times, throughput (requests per second), error rates, user sessions, database query performance, and business KPIs. These metrics provide insights into user experience and application health beyond infrastructure metrics. They enable correlation between system performance and business impact, supporting SLA monitoring and user experience optimization.
DevOps Use: Application metrics enable performance optimization, user experience monitoring, business impact assessment, and correlation with infrastructure changes.
Q276
Monitoring
Monitoring
How do you set thresholds and alerts for metrics?
monitoringalertsthresholds
Answer
Answer
Explanation: Set thresholds based on baseline performance, SLA requirements, and historical data. Use static thresholds for predictable metrics and dynamic thresholds for variable patterns. Implement multi-level alerting (warning, critical) with appropriate escalation. Consider alert frequency, hysteresis to prevent flapping, and correlation rules to reduce noise. Test alerts regularly and adjust based on false positive rates.
DevOps Use: Proper thresholds enable proactive issue detection while minimizing alert fatigue, ensuring teams respond to genuine problems effectively.
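As one possible shape, a Prometheus alerting rule that uses the for clause as hysteresis (metric name and the 5% threshold are illustrative):
  groups:
    - name: app-alerts
      rules:
        - alert: HighErrorRate
          expr: |
            sum(rate(http_requests_total{status=~"5.."}[5m]))
              / sum(rate(http_requests_total[5m])) > 0.05
          for: 10m                       # must hold for 10 minutes before firing
          labels:
            severity: critical
          annotations:
            summary: "5xx error rate above 5% for 10 minutes"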
Q277
Monitoring
Monitoring
What is alert fatigue, and how can it be prevented?
monitoringalert-fatiguebest-practices
Answer
Answer
Explanation: Alert fatigue occurs when teams receive too many alerts, leading to desensitization and delayed response to critical issues. Causes include low thresholds, duplicate alerts, non-actionable notifications, and lack of prioritization. Prevention strategies: meaningful thresholds, alert correlation, escalation policies, regular tuning, actionable alerts only, and proper alert routing to responsible teams.
DevOps Use: Preventing alert fatigue ensures rapid response to genuine incidents, maintains team effectiveness, and improves overall system reliability.
Q278
Monitoring
Monitoring
What is the difference between active and passive monitoring?
monitoringactivepassive
Answer
Answer
Explanation: Active monitoring proactively tests system functionality through synthetic transactions, health checks, and external probes (like ping, HTTP checks, API calls). Passive monitoring observes actual user traffic and system behavior without generating test traffic. Active monitoring detects issues before users are affected; passive monitoring shows real user experience and actual system performance under load.
DevOps Use: Use active monitoring for early issue detection and SLA validation, passive monitoring for real user experience and performance optimization.
Q279
Monitoring
Monitoring
What are popular monitoring tools for DevOps (Prometheus, Grafana, Nagios, Zabbix, ELK)?
monitoringtoolscomparison
Answer
Answer
Explanation: Popular tools include: Prometheus (metrics collection and alerting), Grafana (visualization and dashboards), Nagios (infrastructure monitoring and alerting), Zabbix (comprehensive monitoring platform), ELK Stack (logging and analysis), DataDog (cloud monitoring), New Relic (APM), and cloud-native solutions (CloudWatch, Azure Monitor). Each has strengths: Prometheus for cloud-native, Nagios for traditional infrastructure, ELK for log analysis.
DevOps Use: Choose tools based on environment type, scalability needs, budget, and integration requirements with existing infrastructure and workflows.
Q280
Monitoring
Monitoring
How does Prometheus collect metrics, and what is its architecture?
monitoringprometheusarchitecture
Answer
Answer
Explanation: Prometheus uses a pull-based model, scraping metrics from HTTP endpoints at configured intervals. Architecture includes: Prometheus server (scraping and storage), Pushgateway (for batch jobs), Alertmanager (alert handling), and exporters (metric exposure). It stores data in time-series format with powerful PromQL query language. Service discovery automatically finds targets to scrape.
DevOps Use: Prometheus provides scalable metrics collection for cloud-native applications, Kubernetes environments, and microservices architectures.
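A skeletal prometheus.yml showing scraping, service discovery, and Alertmanager wiring (targets are placeholders):
  global:
    scrape_interval: 15s                  # how often targets are scraped
  scrape_configs:
    - job_name: "node"
      static_configs:
        - targets: ["node1:9100", "node2:9100"]   # node-exporter endpoints
    - job_name: "kubernetes-pods"
      kubernetes_sd_configs:
        - role: pod                       # discover pods to scrape automatically
  alerting:
    alertmanagers:
      - static_configs:
          - targets: ["alertmanager:9093"]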
Q281
Monitoring
Monitoring
What is Grafana used for, and how does it integrate with Prometheus?
monitoringgrafanavisualization
Answer
Answer
Explanation: Grafana is a visualization platform that creates dashboards and graphs from various data sources. It integrates with Prometheus through data source configuration, using PromQL queries to fetch metrics. Grafana provides rich visualization options, alerting capabilities, dashboard templating, and user management. It supports multiple data sources simultaneously, enabling unified views across different monitoring systems.
DevOps Use: Grafana creates operational dashboards, executive reports, and custom visualizations that help teams understand system performance and business metrics.
Q282
Monitoring
Monitoring
What is the difference between metrics scraping and log shipping?
monitoringscrapinglogging
Answer
Answer
Explanation: Metrics scraping pulls numerical data from application endpoints at regular intervals (like Prometheus). Log shipping pushes log events from applications to centralized systems (like ELK stack). Scraping is pull-based, periodic, and handles structured numerical data. Shipping is push-based, event-driven, and handles unstructured text data. Each serves different monitoring needs and has different network and storage requirements.
DevOps Use: Use metrics scraping for performance monitoring and alerting, log shipping for debugging, audit trails, and detailed event analysis.
Q283
Monitoring
Monitoring
How do you monitor Kubernetes clusters?
monitoringkubernetescontainers
Answer
Answer
Explanation: Kubernetes monitoring involves multiple layers: cluster components (API server, etcd, scheduler), nodes (kubelet, resource usage), pods (application metrics, resource consumption), and services (endpoints, performance). Use tools like Prometheus Operator, kube-state-metrics for cluster state, node-exporter for node metrics, and application-specific exporters. Monitor both infrastructure and application layers.
DevOps Use: Kubernetes monitoring ensures cluster health, resource optimization, application performance, and enables auto-scaling decisions based on metrics.
Q284
Monitoring
Monitoring
What is centralized logging, and why is it important?
monitoringloggingcentralized
Answer
Answer
Explanation: Centralized logging aggregates log data from multiple sources into a single location for analysis, search, and correlation. It enables unified log management across distributed systems, provides better security and compliance, facilitates troubleshooting, and enables log-based alerting. Essential for microservices and distributed architectures where logs are scattered across many services and servers.
DevOps Use: Centralized logging enables efficient troubleshooting, security monitoring, compliance reporting, and correlation of events across distributed systems.
Q285
Monitoring
Monitoring
What are ELK (Elasticsearch, Logstash, Kibana) and EFK stacks, and how are they used?
monitoringelklogging
Answer
Answer
Explanation: ELK stack: Elasticsearch (search and analytics engine), Logstash (log processing pipeline), Kibana (visualization interface). EFK replaces Logstash with Fluentd for log collection. These stacks provide complete log management: collection, processing, storage, search, and visualization. They handle structured and unstructured log data, provide real-time search, and support complex log analysis and alerting.
DevOps Use: ELK/EFK stacks enable comprehensive log analysis, real-time monitoring, security event detection, and business intelligence from log data.
Q286
Monitoring
Monitoring
What is distributed tracing, and why is it important in microservices?
monitoringtracingmicroservices
Answer
Answer
Explanation: Distributed tracing tracks requests as they flow through multiple services in microservices architectures, creating a complete picture of request execution. Each trace contains spans representing individual operations, with timing and metadata. Tools like Jaeger, Zipkin, and AWS X-Ray provide tracing capabilities. Essential for understanding performance bottlenecks, debugging failures, and optimizing service interactions in complex distributed systems.
DevOps Use: Distributed tracing enables root cause analysis, performance optimization, and understanding of service dependencies in microservices architectures.
Q287
Monitoring
Monitoring
How do you instrument applications for monitoring (exporters, agents, SDKs)?
monitoringinstrumentationapplications
Answer
Answer
Explanation: Application instrumentation involves adding monitoring capabilities through: exporters (expose metrics endpoints), agents (collect and forward data), SDKs (application libraries for metrics/tracing), and auto-instrumentation tools. Methods include custom metrics in code, APM agents, sidecar containers, and service mesh integration. Choose based on application language, deployment model, and monitoring requirements.
DevOps Use: Proper instrumentation enables comprehensive application monitoring, performance optimization, and business metrics collection without significant code changes.
Q288
Monitoring
Monitoring
How do you correlate metrics, logs, and traces for root cause analysis?
monitoringcorrelationanalysis
Answer
Answer
Explanation: Correlation involves linking metrics, logs, and traces using common identifiers like trace IDs, timestamps, and service names. Use correlation IDs in logs, link traces to metrics through service labels, and align timestamps across data sources. Tools like Grafana, Jaeger, and observability platforms provide correlation capabilities. This unified view enables faster root cause analysis and better understanding of system behavior.
DevOps Use: Correlation enables faster incident resolution, better understanding of system behavior, and more effective troubleshooting in complex distributed systems.
Q289
Monitoring
Monitoring
What is a dashboard, and what key metrics should be visualized?
monitoringdashboardsvisualization
Answer
Answer
Explanation: Dashboards provide visual representations of key metrics and system health through charts, graphs, and alerts. Essential metrics include: system health (CPU, memory, disk), application performance (response time, error rate, throughput), business KPIs, and SLA compliance. Design principles: clear hierarchy, relevant metrics for audience, actionable information, and appropriate time ranges. Different dashboards for different audiences (operations, executives, developers).
DevOps Use: Dashboards enable quick system health assessment, performance monitoring, and data-driven decision making for operations and business teams.
Q290
Monitoring
Monitoring
How do you create meaningful visualizations and alerts in Grafana?
monitoringgrafanavisualization
Answer
Answer
Explanation: Create meaningful Grafana visualizations by: choosing appropriate chart types for data (time series for trends, gauges for current values), using proper time ranges and refresh intervals, implementing template variables for flexibility, setting meaningful thresholds and colors, and organizing panels logically. For alerts: define clear conditions, set appropriate evaluation intervals, configure notification channels, and include context in alert messages.
DevOps Use: Effective Grafana visualizations and alerts enable quick problem identification, performance trend analysis, and proactive issue resolution.
Q291
Monitoring
Monitoring
How do you use annotations in Grafana to correlate events with metrics?
monitoringgrafanaannotations
Answer
Answer
Explanation: Grafana annotations mark specific points in time on dashboards to correlate events (deployments, incidents, configuration changes) with metric changes. Create annotations manually, automatically from queries, or via API integration with deployment tools. Annotations help identify cause-and-effect relationships between operational events and system behavior, improving incident analysis and change impact assessment.
DevOps Use: Annotations enable correlation of deployments with performance changes, incident timeline visualization, and better understanding of system behavior changes.
Q292
Monitoring
Monitoring
What is the difference between real-time and historical monitoring?
monitoringreal-timehistorical
Answer
Answer
Explanation: Real-time monitoring provides immediate visibility into current system state with minimal delay (seconds to minutes), essential for alerting and operational response. Historical monitoring analyzes past data for trends, capacity planning, and performance analysis over longer periods (days to years). Real-time uses streaming data and fast queries; historical uses batch processing and data aggregation for efficiency.
DevOps Use: Real-time monitoring enables immediate incident response, while historical monitoring supports capacity planning, trend analysis, and performance optimization.
Q293
Monitoring
Monitoring
How do you handle metric retention and storage limits?
monitoringretentionstorage
Answer
Answer
Explanation: Metric retention involves balancing storage costs with data value through retention policies, data downsampling, and tiered storage. Strategies include: short-term high-resolution data, long-term aggregated data, automatic data expiration, compression, and archival to cheaper storage. Consider query patterns, compliance requirements, and storage costs when designing retention policies.
DevOps Use: Proper retention policies control storage costs while maintaining necessary data for troubleshooting, compliance, and historical analysis.
Q294
Monitoring
Monitoring
How do you ensure monitoring does not affect application performance?
monitoringperformanceoptimization
Answer
Answer
Explanation: Minimize monitoring impact through: efficient metric collection (sampling, batching), asynchronous data transmission, resource limits on monitoring agents, optimized queries and dashboards, and careful instrumentation placement. Use pull-based systems to control collection frequency, implement circuit breakers for monitoring failures, and monitor the monitoring system itself.
DevOps Use: Performance-conscious monitoring ensures observability without degrading application performance or user experience.
Q295
Monitoring
Monitoring
How do you secure monitoring systems and sensitive metrics?
monitoringsecuritycompliance
Answer
Answer
Explanation: Secure monitoring through: authentication and authorization (RBAC), encrypted data transmission (TLS), secure storage of sensitive metrics, network segmentation, audit logging, and regular security updates. Implement least privilege access, mask sensitive data in logs, use service accounts for automation, and monitor the monitoring infrastructure for security events.
DevOps Use: Secure monitoring protects sensitive operational data, ensures compliance, and prevents unauthorized access to system information.
Q296
Monitoring
Monitoring
How do you monitor cloud-native applications and microservices effectively?
monitoringcloud-nativemicroservices
Answer
Answer
Explanation: Cloud-native monitoring requires: service discovery for dynamic environments, distributed tracing for request flows, container and orchestration metrics, auto-scaling integration, and multi-cloud visibility. Use cloud-native tools (Prometheus, Jaeger), implement health checks, monitor service mesh if used, and focus on business metrics alongside infrastructure metrics.
DevOps Use: Cloud-native monitoring enables effective management of dynamic, distributed applications while maintaining visibility across complex architectures.
Q297
Monitoring
Monitoring
What are some best practices for alerting, escalation, and incident response?
monitoringalertingincident-response
Answer
Answer
Explanation: Best practices include: meaningful alert names and descriptions, appropriate severity levels, escalation policies with time-based escalation, on-call rotations, runbooks for common issues, alert correlation to reduce noise, and post-incident reviews. Implement alert acknowledgment, snoozing capabilities, and integration with incident management tools.
DevOps Use: Effective alerting and escalation ensure rapid incident response, minimize service impact, and maintain team effectiveness during operational issues.
Q298
Monitoring
Monitoring
How do you measure the success of your monitoring strategy (SLOs, SLIs, uptime)?
monitoringslosuccess-metrics
Answer
Answer
Explanation: Measure monitoring success through: Service Level Indicators (SLIs) like response time and error rate, Service Level Objectives (SLOs) defining acceptable performance, uptime/availability metrics, mean time to detection and resolution, alert accuracy (false positive rates), and business impact metrics. Regular review and adjustment based on business needs and user experience.
DevOps Use: Monitoring success metrics ensure the monitoring strategy aligns with business objectives and effectively supports service reliability goals.
Q299
Terraform
Terraform
What is Terraform, and how does it differ from other IaC tools (like CloudFormation, Ansible)?
terraformiacfundamentals
Answer
Answer
Explanation: Terraform is an open-source Infrastructure as Code (IaC) tool that uses declarative configuration files to provision and manage cloud resources. Unlike CloudFormation (AWS-specific) or ARM templates (Azure-specific), Terraform is cloud-agnostic and supports multiple providers. Compared to Ansible (configuration management), Terraform focuses on infrastructure provisioning with state management and dependency resolution.
DevOps Use: Terraform enables consistent infrastructure deployment across environments, version-controlled infrastructure changes, and automated resource provisioning in CI/CD pipelines.
Q300
Terraform
Terraform
What are the benefits of using Terraform in DevOps?
terraformbenefitsdevops
Answer
Answer
Explanation: Key benefits include: Infrastructure as Code (version control, repeatability), multi-cloud support, declarative syntax (describe desired state), state management (tracks resources), plan preview (shows changes before applying), modular design (reusable components), and extensive provider ecosystem. Terraform enables consistent deployments, reduces manual errors, and provides infrastructure versioning.
DevOps Use: Terraform streamlines infrastructure deployment, enables environment consistency, supports disaster recovery, and integrates with CI/CD for automated infrastructure changes.
Q301
Terraform
Terraform
What is the difference between Terraform plan, apply, and destroy?
terraformcommandsworkflow
Answer
Answer
Explanation: terraform plan creates an execution plan showing what actions Terraform will take without making changes (dry run). terraform apply executes the plan, creating/updating/deleting resources to match configuration. terraform destroy removes all resources managed by Terraform. Plan is for preview, apply for execution, destroy for cleanup. Always run plan before apply to review changes.
DevOps Use: Plan enables change review and approval processes, apply implements infrastructure changes, destroy supports environment cleanup and cost management.
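The usual loop, sketched as commands:
  terraform init                 # download providers, configure the backend
  terraform plan -out=tfplan     # preview and save the execution plan
  terraform apply tfplan         # apply exactly the reviewed plan
  terraform destroy              # tear down all managed resources (prompts first)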
Q302
Terraform
Terraform
What are Terraform providers, and why are they important?
terraformprovidersintegrations
Answer
Answer
Explanation: Providers are plugins that enable Terraform to interact with APIs of cloud platforms, SaaS services, and other systems. Examples: AWS, Azure, GCP, Kubernetes, GitHub, DataDog. Providers define available resources and data sources, handle authentication, and translate Terraform configurations into API calls. Each provider has its own resources, arguments, and authentication methods.
DevOps Use: Providers enable multi-cloud deployments, service integrations, and comprehensive infrastructure management across different platforms and services.
Q303
Terraform
Terraform
What is the difference between Terraform CLI and Terraform Cloud/Enterprise?
terraformclicloud
Answer
Answer
Explanation: Terraform CLI is the open-source command-line tool for local execution. Terraform Cloud/Enterprise adds collaboration features: remote state management, team workspaces, policy enforcement (Sentinel), cost estimation, private module registry, and web UI. Cloud version provides hosted execution, while CLI requires local state management and coordination.
DevOps Use: CLI for individual development and small teams, Cloud/Enterprise for team collaboration, governance, and enterprise-scale infrastructure management.
Q304
Terraform
Terraform
What are Terraform resources, data sources, and variables?
terraformresourcesconfiguration
Answer
Answer
Explanation: Resources define infrastructure components to create/manage (aws_instance, azurerm_virtual_machine). Data sources fetch information about existing infrastructure (aws_ami, azurerm_resource_group). Variables are input parameters that make configurations flexible and reusable. Resources create infrastructure, data sources query existing infrastructure, variables parameterize configurations.
DevOps Use: Resources provision infrastructure, data sources enable integration with existing resources, variables support environment-specific configurations and reusable modules.
Q305
Terraform
Terraform
What are the types of Terraform variables, and how do you pass values?
terraformvariablesconfiguration
Answer
Answer
Explanation: Variable types include: string, number, bool, list, map, object, tuple, set, and any. Pass values via: command line (-var), environment variables (TF_VAR_name), .tfvars files, terraform.tfvars (auto-loaded), or interactive prompts. Variables can have default values, descriptions, and validation rules. Use locals for computed values within configurations.
DevOps Use: Variables enable environment-specific configurations, secure credential passing, and reusable module parameters in CI/CD pipelines.
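A sketch of a typed, validated variable and the common ways to set it:
  variable "environment" {
    type        = string
    description = "Deployment environment"
    default     = "dev"
    validation {
      condition     = contains(["dev", "staging", "prod"], var.environment)
      error_message = "Environment must be dev, staging, or prod."
    }
  }

  # terraform apply -var="environment=prod"
  # export TF_VAR_environment=prod
  # terraform apply -var-file="prod.tfvars"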
Q306
Terraform
Terraform
How do you use outputs in Terraform, and why are they useful?
terraformoutputsmodules
Answer
Answer
Explanation: Outputs expose values from Terraform configurations, making them available to other configurations, modules, or external systems. Define with output blocks, access via terraform output command or remote state data sources. Outputs can be marked sensitive to hide values. They enable sharing information between modules and configurations.
DevOps Use: Outputs share resource information between modules, provide values for CI/CD pipelines, and enable integration with other tools and configurations.
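Two sketched outputs (the referenced resources are assumptions):
  output "instance_ip" {
    description = "Public IP of the app server"
    value       = aws_instance.app.public_ip      # assumes an aws_instance.app resource
  }

  output "db_password" {
    value     = random_password.db.result         # assumes a random_password resource
    sensitive = true                              # redacted in CLI output
  }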
Q307
Terraform
Terraform
What is the difference between count and for_each in Terraform resources?
terraformcountfor-each
Answer
Answer
Explanation: count creates multiple instances using numeric index (0, 1, 2...), suitable for identical resources. for_each creates instances using map keys or set values, better for resources with different configurations. count is simpler but can cause issues when removing middle elements; for_each provides stable resource addresses and is more flexible for complex scenarios.
DevOps Use: Use count for simple resource multiplication, for_each for creating resources with different configurations or when resource identity matters.
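Contrasting sketches (the AMI variable and instance types are illustrative):
  # count: identical instances addressed by index (worker[0], worker[1], ...)
  resource "aws_instance" "worker" {
    count         = 3
    ami           = var.ami_id
    instance_type = "t3.micro"
    tags          = { Name = "worker-${count.index}" }
  }

  # for_each: per-key configuration with stable addresses (app["web"], app["api"])
  resource "aws_instance" "app" {
    for_each      = { web = "t3.small", api = "t3.medium" }
    ami           = var.ami_id
    instance_type = each.value
    tags          = { Name = each.key }
  }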
Q308
Terraform
Terraform
How do you handle conditional resource creation?
terraformconditionalsresources
Answer
Answer
Explanation: Use conditional expressions with count or for_each: count = var.create_resource ? 1 : 0 creates resource based on boolean variable. Use dynamic blocks for conditional resource arguments. Combine with locals for complex conditions. Conditional creation enables environment-specific resources and feature toggles in infrastructure code.
DevOps Use: Conditional resources support different environment configurations, optional features, and cost optimization by creating resources only when needed.
Q309
Terraform
What is a Terraform module, and why should you use them?
terraform, modules, reusability
Answer
Explanation: Modules are reusable Terraform configurations that encapsulate related resources. They promote code reuse, standardization, and abstraction. Modules have inputs (variables), outputs, and can contain multiple resources. Benefits include: reduced duplication, easier maintenance, standardized patterns, and simplified complex configurations. Use modules for common patterns like VPC setup, application stacks, or security groups.
DevOps Use: Modules enable standardized infrastructure patterns, reduce configuration duplication, and support organizational best practices across teams and projects.
Q310
Terraform
What is the difference between a root module and a child module?
terraform, modules, architecture
Answer
Explanation: Root module is the main Terraform configuration in your working directory where you run terraform commands. Child modules are called by the root module or other modules using module blocks. Root module contains provider configurations and backend settings; child modules focus on specific functionality. Root module orchestrates the overall infrastructure, child modules provide reusable components.
DevOps Use: Root modules define environment-specific configurations and call child modules for standardized components, enabling modular infrastructure design.
Q311
Terraform
How do you pass variables to modules and retrieve outputs?
terraform, modules, variables
Answer
Explanation: Pass variables to modules as arguments inside the module block, and retrieve outputs using module.<module_name>.<output_name> syntax (see the sketch below). Module variables are defined in the module's variables.tf, outputs in its outputs.tf. This enables parameterized, reusable modules with configurable behavior.
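Example: calling a local module and consuming its output; this assumes ./modules/vpc declares a vpc_cidr variable and a vpc_id output:
module "vpc" {
  source   = "./modules/vpc"
  vpc_cidr = var.cidr
}

output "vpc_id" {
  value = module.vpc.vpc_id
}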
DevOps Use: Variable passing enables customizable modules for different environments, output retrieval allows modules to share information with other parts of configuration.
Q312
Terraform
What are the best practices for organizing Terraform code?
terraform, organization, best-practices
Answer
Explanation: Best practices include: separate environments (dev/staging/prod), use modules for reusable components, consistent file naming (main.tf, variables.tf, outputs.tf), version control with .gitignore for sensitive files, use remote state backends, implement proper variable validation, and document modules. Organize by environment or service, not by resource type.
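One common layout, sketched with illustrative names:
modules/
  vpc/
    main.tf        # resources
    variables.tf   # inputs
    outputs.tf     # outputs
environments/
  dev/
    main.tf        # calls modules with dev-specific values
    backend.tf     # dev state backend
  prod/
    main.tf
    backend.tf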
DevOps Use: Good organization enables team collaboration, reduces errors, simplifies maintenance, and supports scalable infrastructure management across multiple environments.
Q313
Terraform
How do you use Terraform Registry modules safely in production?
terraform, registry, modules
Answer
Explanation: Use Terraform Registry modules by specifying version constraints, reviewing module source code, checking community ratings and maintenance status. Pin to specific versions in production, test in non-production first, understand module dependencies and outputs. Verify module security, licensing, and support status before adoption.
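Example: pinning a registry module to a version range (terraform-aws-modules/vpc/aws is a widely used community module; the constraint and inputs shown are illustrative):
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"   # stay within 5.x; review the changelog before bumping major versions

  name = "prod-vpc"
  cidr = "10.0.0.0/16"
}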
DevOps Use: Registry modules accelerate development with proven patterns while maintaining security and reliability through proper vetting and version management.
Q314
Terraform
What is Terraform state, and why is it important?
terraform, state, management
Answer
Explanation: Terraform state is a JSON file tracking the mapping between Terraform configuration and real-world resources. It stores resource metadata, dependencies, and current state for comparison during plan/apply operations. State enables Terraform to know what exists, detect drift, and determine necessary changes. Critical for Terraform's operation and must be protected and backed up.
DevOps Use: State management enables infrastructure tracking, change detection, and team collaboration while preventing resource conflicts and data loss.
Q315
Terraform
What is the difference between local state and remote state?
terraform, state, backends
Answer
Explanation: Local state stores terraform.tfstate file locally on the machine running Terraform. Remote state stores state in shared backends (S3, Azure Storage, GCS, Terraform Cloud). Remote state enables team collaboration, provides locking, backup, and versioning. Local state is simpler but doesn't support collaboration or provide durability guarantees.
DevOps Use: Use local state for development and learning, remote state for team environments and production to ensure collaboration and state durability.
Q316
Terraform
How do you lock state files to prevent concurrent modifications?
terraform, state, locking
Answer
Explanation: State locking prevents multiple Terraform operations from running simultaneously on the same state. Supported backends (S3 with DynamoDB, GCS, Azure Storage) automatically handle locking. Terraform acquires lock before operations and releases after completion. Use force-unlock only in emergencies when locks are stuck due to crashes or network issues.
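Example: clearing a stale lock left by a crashed run, only after confirming no other operation is active (the lock ID appears in the "state locked" error message):
terraform force-unlock <LOCK_ID>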
DevOps Use: State locking prevents corruption from concurrent operations, ensures data integrity in team environments, and provides safe CI/CD pipeline execution.
Q317
Terraform
What are state backends, and name some common backends?
terraform, backends, storage
Answer
Explanation: State backends determine where and how Terraform state is stored and accessed. Common backends include: S3 (with DynamoDB for locking), Azure Storage, Google Cloud Storage, Terraform Cloud, Consul, and local (default). Each backend has different features for locking, encryption, versioning, and access control. Choose based on cloud provider, team needs, and security requirements.
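Example: an S3 backend with DynamoDB locking (bucket, key, and table names are illustrative):
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/network/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
Run terraform init after adding or changing a backend; Terraform offers to migrate existing state.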
DevOps Use: Backends enable secure, durable state storage with appropriate access controls and collaboration features for different organizational needs.
Q318
Terraform
How do you handle state drift or manual changes in the cloud?
terraform, drift, state
Answer
Explanation: State drift occurs when actual infrastructure differs from Terraform state due to manual changes, external tools, or console modifications. Detect drift with terraform plan (shows differences) and handle it by importing changes into Terraform, reverting the manual changes, or updating configuration to match reality. Use terraform apply -refresh-only (which supersedes the deprecated terraform refresh) to update state with the current resource status.
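Example commands (the import target is illustrative):
terraform plan -refresh-only     # report drift without proposing configuration changes
terraform apply -refresh-only    # accept the drifted values into state
terraform import aws_s3_bucket.logs my-manually-created-bucket   # adopt an out-of-band resource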
DevOps Use: Drift detection and remediation maintain infrastructure consistency, prevent configuration conflicts, and ensure Terraform remains the source of truth.
Q319
Terraform
What are Terraform workspaces, and how do you use them for environments?
terraform, workspaces, environments
Answer
Explanation: Workspaces allow multiple state files for the same configuration, enabling environment separation (dev, staging, prod) with shared code. Each workspace has isolated state but shares the same configuration files. Use terraform workspace commands to create, select, and manage workspaces. Access current workspace via terraform.workspace variable.
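Example (the instance sizing is illustrative):
terraform workspace new staging
terraform workspace select staging
terraform workspace list

# In configuration, branch on the current workspace:
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro"
}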
DevOps Use: Workspaces enable environment management with shared configurations, reducing code duplication while maintaining environment isolation.
Q320
Terraform
How do you integrate Terraform with CI/CD pipelines?
terraform, cicd, automation
Answer
Explanation: Integrate Terraform in CI/CD through: automated plan on pull requests, apply on merge to main branch, use service accounts for authentication, store state remotely, implement approval gates for production, and include validation/linting steps. Use tools like Atlantis, Terraform Cloud, or custom pipeline scripts for automation.
DevOps Use: CI/CD integration enables automated infrastructure deployment, change review processes, and consistent environment management across development lifecycle.
Q321
Terraform
How do you perform plan & apply safely in pipelines?
terraform, pipelines, safety
Answer
Explanation: Safe pipeline practices include: always run plan before apply, implement approval gates for production changes, use separate service accounts with minimal permissions, validate configurations with terraform validate and linting tools, implement rollback procedures, and use plan artifacts to ensure consistency between plan and apply phases.
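Example: the saved-plan pattern, which guarantees apply executes exactly the plan that was reviewed:
terraform init -input=false
terraform validate
terraform plan -input=false -out=tfplan
# ...approval gate (human review or policy check) here...
terraform apply -input=false tfplan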
DevOps Use: Safe pipeline practices prevent accidental infrastructure changes, ensure change review, and maintain system reliability in automated deployments.
Q322
Terraform
How do you manage sensitive data in Terraform?
terraform, secrets, security
Answer
Explanation: Manage secrets through: environment variables for provider credentials, external secret management systems (Vault, AWS Secrets Manager), sensitive variable marking, encrypted state backends, and avoiding hardcoded secrets in configurations. Use data sources to fetch secrets at runtime rather than hardcoding them; note that fetched values are still recorded in state, which is another reason the backend must be encrypted.
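Example: fetching a secret at run time from AWS Secrets Manager (the secret name is illustrative, and the database resource is abbreviated):
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "prod/db-password"
}

resource "aws_db_instance" "main" {
  # ...engine, instance_class, and other required arguments...
  password = data.aws_secretsmanager_secret_version.db.secret_string
}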
DevOps Use: Proper secret management ensures security compliance, prevents credential exposure, and enables secure automation in CI/CD pipelines.
Q323
Terraform
How do you handle multi-cloud infrastructure in Terraform?
terraform, multi-cloud, providers
Answer
Explanation: Multi-cloud Terraform involves: using multiple providers in same configuration, organizing by cloud provider or service, managing different authentication methods, handling provider-specific resources, and considering cross-cloud networking. Use modules to abstract cloud-specific implementations and enable consistent interfaces across providers.
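Example: two providers in one configuration (project, region, and bucket names are illustrative):
provider "aws" {
  region = "us-east-1"
}

provider "google" {
  project = "my-project"
  region  = "us-central1"
}

resource "aws_s3_bucket" "primary" {
  bucket = "myapp-primary"
}

resource "google_storage_bucket" "replica" {
  name     = "myapp-replica"
  location = "US"
}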
DevOps Use: Multi-cloud strategies enable vendor diversification, disaster recovery, cost optimization, and leveraging best-of-breed services across cloud providers.
Q324
Terraform
How do you implement Terraform import to bring existing resources under management?
terraform, import, migration
Answer
Explanation: Terraform import brings existing infrastructure under Terraform management by mapping real resources to Terraform configuration. Process: write configuration for existing resource, run terraform import with resource address and ID, verify with terraform plan. Import only creates state mapping; you must write matching configuration manually.
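Example: the classic CLI flow, plus the declarative import block available in Terraform 1.5+ (the bucket is illustrative):
# 1. Write configuration matching the existing resource
resource "aws_s3_bucket" "logs" {
  bucket = "my-existing-logs-bucket"
}

# 2a. Classic CLI import:
#       terraform import aws_s3_bucket.logs my-existing-logs-bucket
# 2b. Or, with Terraform >= 1.5, a declarative import block:
import {
  to = aws_s3_bucket.logs
  id = "my-existing-logs-bucket"
}

# 3. terraform plan should then show no pending changes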
DevOps Use: Import enables gradual Terraform adoption, brings legacy infrastructure under IaC management, and recovers from state issues or manual resource creation.
Q325
Terraform
How do you handle resource dependencies explicitly or implicitly?
terraform, dependencies, ordering
Answer
Explanation: Terraform handles dependencies through: implicit dependencies (resource references create automatic ordering), explicit dependencies (depends_on argument for non-obvious relationships), and dependency graphs for execution planning. Terraform automatically determines creation order based on resource references and dependency declarations.
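Example contrasting the two (resource names are illustrative; aws_subnet.main and aws_iam_role_policy.worker are assumed to exist elsewhere):
# Implicit: the reference to aws_subnet.main.id orders the subnet before the instance
resource "aws_instance" "app" {
  ami           = var.ami_id
  subnet_id     = aws_subnet.main.id
  instance_type = "t3.micro"
}

# Explicit: no attribute reference exists, so declare the hidden relationship
resource "aws_instance" "worker" {
  ami           = var.ami_id
  instance_type = "t3.micro"
  depends_on    = [aws_iam_role_policy.worker]
}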
DevOps Use: Proper dependency management ensures correct resource creation order, prevents race conditions, and enables reliable infrastructure provisioning.
Q326
Terraform
How do you detect and fix terraform plan/apply errors?
terraform, debugging, troubleshooting
Answer
Explanation: Debug Terraform errors through: reading error messages carefully, checking provider documentation, validating syntax with terraform validate, using terraform console for expression testing, enabling detailed logging (TF_LOG), and checking resource state with terraform show. Common issues include syntax errors, authentication problems, and resource conflicts.
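Example: enabling detailed logs for a single run (levels include TRACE, DEBUG, INFO, WARN, ERROR):
TF_LOG=DEBUG TF_LOG_PATH=./terraform-debug.log terraform plan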
DevOps Use: Effective debugging reduces deployment time, prevents infrastructure issues, and improves team productivity in infrastructure management.
Q327
Terraform
What are best practices for Terraform code review and version control?
terraform, code-review, version-control
Answer
Explanation: Best practices include: use version control for all Terraform code, implement pull request workflows, review plans before apply, use consistent formatting (terraform fmt), validate syntax, check for security issues, document changes, and maintain .gitignore for sensitive files. Include both code and plan review in approval process.
DevOps Use: Code review ensures quality, security, and knowledge sharing while preventing infrastructure mistakes and maintaining team standards.
Q328
Terraform
How do you ensure Terraform scripts are idempotent and safe to run multiple times?
terraform, idempotency, reliability
Answer
Explanation: Terraform is inherently idempotent due to its declarative nature and state management. Ensure idempotency by: avoiding imperative scripts in provisioners, using data sources instead of hardcoded values, implementing proper resource lifecycle management, and testing configurations multiple times. Terraform should show no changes when run against unchanged configuration.
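Example: lifecycle ignore_changes keeps repeated runs from fighting an external controller (an illustrative sketch; aws_subnet.main and aws_launch_template.app are assumed to exist elsewhere):
resource "aws_autoscaling_group" "app" {
  name                = "app-asg"
  min_size            = 1
  max_size            = 10
  desired_capacity    = 2
  vpc_zone_identifier = [aws_subnet.main.id]

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }

  lifecycle {
    ignore_changes = [desired_capacity]  # adjusted at runtime by scaling policies, not by Terraform
  }
}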
DevOps Use: Idempotency enables safe re-runs, reliable CI/CD pipelines, and consistent infrastructure state regardless of execution frequency.