50 Kubernetes Interview Questions for Beginners (With Answers) – 2026 Edition

Last Updated: February 2026

So you’ve got a Kubernetes interview coming up? I totally get that mix of excitement and nervousness you’re probably feeling right now.

I remember my first K8s interview – I’d been playing around with containers for a few months, thought I had a pretty solid grasp on pods and deployments, and then BAM! The interviewer hit me with questions about StatefulSets and persistent volumes that I definitely wasn’t ready for. Live and learn, right?

Here’s the thing about Kubernetes interviews in 2026: they’ve evolved quite a bit. Interviewers aren’t just looking for people who can recite definitions anymore. They want to know if you actually get it – like, can you troubleshoot a failing pod? Do you understand why you’d choose one deployment strategy over another? That kind of practical knowledge.

This guide on Kubernetes Interview Questions is basically everything I wish I’d had before my first few interviews. I’ve broken down 50 of the most common Kubernetes interview questions you’ll face as a beginner, organized them by topic so they actually make sense, and included the kind of answers that’ll make interviewers nod approvingly.

Whether you’re prepping for your first DevOps role or trying to level up from traditional infrastructure work, I’ve got you covered. Let’s dive in!

Kubernetes Fundamentals

Let’s start with the basics. These are the questions that almost every interviewer will throw at you in the first 10 minutes. Nail these, and you’ll set a solid foundation for the rest of the conversation.

1. What is Kubernetes, and why do we use it?

Answer: Kubernetes (K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.

Think of it this way: if Docker is like having a shipping container, Kubernetes is the entire shipping yard that manages thousands of containers – deciding where they go, making sure they’re running, replacing them if they fail, and balancing the load across all your ships (nodes).

We use Kubernetes because manually managing hundreds or thousands of containers across multiple servers would be a nightmare. K8s gives us:

  • Automatic scaling based on demand
  • Self-healing capabilities (it restarts failed containers)
  • Load balancing and service discovery
  • Automated rollouts and rollbacks
  • Storage orchestration

It’s basically the difference between manually juggling 100 balls versus having a robot that juggles them for you while you focus on strategy.

2. Explain the architecture of Kubernetes. What are the main components?

Answer: Kubernetes follows a master-worker architecture (though we now call it control plane and worker nodes to be more inclusive).

Control Plane Components:

  • API Server (kube-apiserver): The front door to Kubernetes. All commands go through here.
  • etcd: Think of this as Kubernetes’ brain – a distributed key-value store that holds all cluster data.
  • Scheduler (kube-scheduler): Decides which node should run newly created pods based on resource requirements.
  • Controller Manager: Runs various controllers that regulate the state of the cluster (making sure the desired state matches actual state).
  • Cloud Controller Manager: Handles cloud-specific logic when you’re running K8s on AWS, Azure, or GCP.

Worker Node Components:

  • kubelet: An agent on each node that makes sure containers are running as expected.
  • kube-proxy: Manages network rules and enables communication between pods.
  • Container Runtime: The underlying software that runs containers (Docker, containerd, CRI-O, etc.).

Here’s a mental model: the control plane is like the management team making decisions, while worker nodes are the actual warehouse workers executing those decisions.

3. What’s the difference between Docker and Kubernetes?

Answer: This one trips up a lot of people! Docker and Kubernetes aren’t competitors – they work together.

Docker is a containerization platform. It packages your application and all its dependencies into a container image. It’s like putting your app in a standardized box.

Kubernetes is a container orchestration platform. It manages and coordinates multiple containers across multiple machines. It’s like the logistics company that moves, tracks, and manages all those boxes.

In 2026, Kubernetes doesn’t actually require Docker anymore (it can use containerd or other runtimes directly), but the concept remains: Docker creates containers, Kubernetes orchestrates them.

Analogy time: Docker is like creating pre-cooked meals in sealed containers. Kubernetes is like the entire restaurant chain management system that decides which kitchen cooks what meal, when to deliver them, and how to handle it when a kitchen breaks down.

4. What is a Kubernetes cluster?

Answer: A Kubernetes cluster is a set of machines (physical or virtual) that run containerized applications managed by Kubernetes. It consists of at least one control plane node and one or more worker nodes.

In production, you’d typically have:

  • Multiple control plane nodes (usually 3 for high availability)
  • Many worker nodes (could be 10, 100, or even thousands depending on your scale)

The beauty of a cluster is that your applications don’t care which specific machine they’re running on. Kubernetes abstracts all that away, so you just say “run 5 copies of my app” and the cluster figures out the details.

5. What is kubectl and why is it important?

Answer: kubectl (pronounced “kube-control” or “kube-cuttle” depending on who you ask) is the command-line tool for interacting with Kubernetes clusters. It’s your primary interface for deploying applications, inspecting resources, and troubleshooting issues.

Some commands you’ll use constantly:

kubectl get pods                    # List all pods
kubectl describe pod my-pod         # Detailed info about a pod
kubectl logs my-pod                 # View pod logs
kubectl apply -f deployment.yaml    # Deploy from a manifest file
kubectl delete pod my-pod           # Delete a pod

Think of kubectl as the remote control for your Kubernetes cluster. You’ll be using it a lot, so getting comfortable with it is non-negotiable for any K8s job.


Pods and Containers

Pods are the fundamental building blocks in Kubernetes. You absolutely need to understand these inside and out.

6. What is a Pod in Kubernetes?

Answer: A Pod is the smallest deployable unit in Kubernetes. It’s a wrapper around one or more containers that share storage, network, and specifications for how to run.

Here’s what makes pods special:

  • Containers in a pod share the same IP address and port space
  • They can communicate via localhost
  • They share storage volumes
  • They’re scheduled together on the same node

Most of the time, you’ll have one container per pod (the most common pattern). But sometimes you’ll use multiple containers in a pod for things like:

  • A sidecar container that handles logging
  • An init container that sets things up before the main app starts
  • A helper container that does data syncing

Pro tip: In interviews, be ready to explain why you’d run multiple containers in one pod versus multiple separate pods. The answer usually revolves around tight coupling and shared resources.
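A minimal single-container Pod manifest (names and image are illustrative) looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod            # illustrative name
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: nginx:1.27     # any container image works here
    ports:
    - containerPort: 80   # port the container listens on
```

You'd apply it with kubectl apply -f pod.yaml, though in practice you'd usually let a Deployment create pods for you.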

7. What’s the difference between a Pod and a Container?

Answer: A container is the actual runtime instance of an image (like a Docker container). A pod is a Kubernetes abstraction that wraps one or more containers.

Think of it this way:

  • Container = a single process running in isolation
  • Pod = a logical host for one or more containers that need to work closely together

Kubernetes doesn’t manage containers directly; it manages pods. When you create a pod, Kubernetes then creates the containers inside it.

8. Can you run multiple containers in a single Pod? When would you do this?

Answer: Yes! This is called the “sidecar pattern” or “multi-container pod pattern.”

Common use cases:

  1. Logging sidecar: Main container runs your app, sidecar container ships logs to a centralized system
  2. Service mesh proxy: Main container runs your app, sidecar handles all network traffic (like Istio/Envoy)
  3. Init containers: Run setup tasks before the main container starts
  4. Adapter pattern: Sidecar transforms data for the main container
  5. Ambassador pattern: Sidecar proxies network connections

When NOT to use multiple containers in a pod:

  • When containers don’t need to share resources
  • When they can scale independently
  • When they don’t need localhost communication

The rule of thumb: if two containers would fail together and need to scale together, put them in the same pod. Otherwise, separate pods.
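The logging-sidecar case can be sketched as a two-container pod sharing an emptyDir volume (images and paths here are illustrative, not a specific product's setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar      # illustrative name
spec:
  volumes:
  - name: logs
    emptyDir: {}                  # scratch space shared by both containers
  containers:
  - name: app
    image: my-app:1.0             # hypothetical application image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app     # app writes its logs here
  - name: log-shipper             # sidecar reads what the app writes
    image: fluent/fluent-bit:3.0  # example log shipper
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true              # sidecar only needs to read
```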

9. What is the lifecycle of a Pod?

Answer: Pods go through several phases:

  1. Pending: Pod has been accepted by Kubernetes but isn’t running yet (maybe downloading images)
  2. Running: Pod is bound to a node and at least one container is running
  3. Succeeded: All containers completed successfully and won’t be restarted
  4. Failed: All containers terminated, and at least one failed
  5. Unknown: Can’t determine pod state (usually a communication issue with the node)

There are also container states within pods:

  • Waiting: Container isn’t running yet (pulling image, waiting for init containers)
  • Running: Container is executing
  • Terminated: Container finished execution or failed

Understanding this lifecycle is crucial for debugging. When an interviewer asks “why isn’t my pod starting?” you need to know where in this lifecycle to look.

10. What are Init Containers?

Answer: Init containers are specialized containers that run before the main application containers in a pod. They run to completion sequentially.

Why use them?

  • Run setup scripts that your main container doesn’t need
  • Wait for external services to be ready
  • Populate shared volumes with data
  • Perform security or compliance checks

Example scenario: Your main app needs a database connection. The init container can wait until the database is reachable before allowing the main container to start. This prevents your app from crashing repeatedly while waiting for dependencies.

initContainers:
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nc -z db 5432; do sleep 1; done']

Key point: Init containers have their own images and can use tools that your main container doesn’t need, keeping your app container lean.


Deployments and ReplicaSets

Now we’re getting into how you actually manage applications in production. This section is super important.

11. What is a Deployment in Kubernetes?

Answer: A Deployment is a higher-level abstraction that manages ReplicaSets and provides declarative updates to Pods. It’s probably the most common resource you’ll work with in Kubernetes.

What Deployments give you:

  • Rollout management: Deploy new versions gradually
  • Rollback capability: Quickly revert to previous versions if something breaks
  • Scaling: Easily increase/decrease replicas
  • Self-healing: Automatically replace failed pods

Think of a Deployment as your application’s desired state manager. You tell it “I want 5 replicas of version 2.0” and it makes that happen, handling all the messy details.
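Here's a sketch of that "5 replicas of version 2.0" desired state (the image name is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5                 # desired number of pods
  selector:
    matchLabels:
      app: my-app             # must match the pod template labels
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:2.0     # hypothetical image; bump this to roll out a new version
        ports:
        - containerPort: 8080
```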

12. Explain the relationship between Deployments, ReplicaSets, and Pods.

Answer: This is a hierarchy question that interviewers love:

Deployment (top level) → ReplicaSet (middle) → Pods (bottom)

  • Deployment: Defines the desired state and update strategy
  • ReplicaSet: Ensures the right number of pod replicas are running
  • Pods: The actual application instances

Here’s what happens when you create a Deployment:

  1. Deployment creates a ReplicaSet
  2. ReplicaSet creates the specified number of Pods
  3. When you update the Deployment, it creates a new ReplicaSet
  4. The new ReplicaSet scales up while the old one scales down (rolling update)

You typically never create ReplicaSets directly – Deployments manage them for you. And while you can create Pods directly, you usually don’t because Deployments give you all those management features.

13. What is a ReplicaSet and how is it different from a Replication Controller?

Answer: A ReplicaSet ensures that a specified number of pod replicas are running at any given time. If a pod dies, the ReplicaSet creates a new one.

ReplicaSet vs. Replication Controller:

  • ReplicaSet is the newer version with more flexible selector options (set-based selectors)
  • Replication Controller is the older version (equality-based selectors only)

In 2026, you should always use Deployments (which create ReplicaSets) rather than using ReplicaSets or Replication Controllers directly. ReplicationControllers are basically deprecated at this point.

Pro tip: If an interviewer asks about Replication Controllers, acknowledge you know what they are but explain that in modern Kubernetes, Deployments are the way to go.

14. How do you perform a rolling update in Kubernetes?

Answer: Rolling updates happen automatically when you update a Deployment! That’s the beauty of it.

When you change the image version in your Deployment:

kubectl set image deployment/my-app my-app=my-app:v2

Kubernetes:

  1. Creates a new ReplicaSet with the new image
  2. Gradually scales up the new ReplicaSet
  3. Gradually scales down the old ReplicaSet
  4. Ensures minimum availability throughout (controlled by maxUnavailable and maxSurge)

You can control the rollout strategy:

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1    # Max pods that can be unavailable
    maxSurge: 1          # Max extra pods during update

To check rollout status:

kubectl rollout status deployment/my-app

This is way better than the old days of taking down all your servers, updating them, and praying everything works when you bring them back up!

15. How do you rollback a Deployment?

Answer: Super easy! Kubernetes keeps the old ReplicaSets around for exactly this reason.

kubectl rollout undo deployment/my-app           # Rollback to previous version
kubectl rollout undo deployment/my-app --to-revision=3  # Rollback to specific version

To see rollout history:

kubectl rollout history deployment/my-app

Pro tip: By default, Kubernetes keeps the last 10 ReplicaSets. You can change this with revisionHistoryLimit in your Deployment spec.

In interviews, emphasize that rollbacks are almost instantaneous because the old ReplicaSet still exists – Kubernetes just needs to scale it back up.

16. What’s the difference between Recreate and RollingUpdate deployment strategies?

Answer: These are the two main deployment strategies:

RollingUpdate (default):

  • Gradually replaces old pods with new ones
  • Zero downtime deployment
  • Uses maxUnavailable and maxSurge to control the pace
  • Use when: You need continuous availability (most production apps)

Recreate:

  • Kills all old pods before creating new ones
  • Results in downtime
  • Simpler and faster than rolling updates
  • Use when: Your app can’t run multiple versions simultaneously, or downtime is acceptable (dev/test environments)

strategy:
  type: Recreate  # or RollingUpdate

In interviews, always mention that RollingUpdate is generally preferred for production because downtime is bad for business!

17. What is a DaemonSet?

Answer: A DaemonSet ensures that a copy of a pod runs on every node (or a subset of nodes) in your cluster. As nodes are added to the cluster, pods are automatically added to them.

Common use cases:

  • Log collectors: Running Fluentd or Filebeat on every node
  • Monitoring agents: Running Prometheus Node Exporter on every node
  • Storage daemons: Running Ceph or Gluster storage components
  • Network plugins: CNI network components

Example: You want to collect logs from every node in your cluster. Instead of manually deploying a log collector to each node, you create a DaemonSet and Kubernetes automatically ensures every node runs the collector.

Key difference from Deployments: DaemonSets ignore replicas – they always run one pod per node (or matching nodes).
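A minimal sketch of that log-collector DaemonSet (image and paths are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16   # example collector image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log              # read the node's logs from the host filesystem
```

Note there's no replicas field anywhere: one pod per matching node, always.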

18. What is a StatefulSet and when would you use it?

Answer: StatefulSets are designed for stateful applications that need stable, persistent identities and storage. Unlike Deployments where pods are interchangeable, StatefulSet pods have:

  • Stable network identities: Each pod gets a persistent hostname (my-app-0, my-app-1, etc.)
  • Stable storage: Each pod gets its own PersistentVolume that persists across restarts
  • Ordered deployment and scaling: Pods are created/deleted in a specific order

When to use StatefulSets:

  • Databases (MySQL, PostgreSQL, MongoDB)
  • Distributed systems (Kafka, ZooKeeper, Elasticsearch)
  • Applications requiring persistent data
  • Apps needing stable network identifiers

When NOT to use StatefulSets:

  • Stateless applications (use Deployments instead)
  • Apps where pods are interchangeable
  • Most web applications and microservices

Real talk: StatefulSets are more complex than Deployments. Only use them when you truly need the guarantees they provide.
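A sketch of a three-replica database StatefulSet, assuming a headless Service named db exists; volumeClaimTemplates is what gives each pod (db-0, db-1, db-2) its own PersistentVolumeClaim:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless Service that provides stable per-pod DNS names
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one PVC is stamped out per pod and survives restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi      # illustrative size
```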


Services and Networking

Networking in Kubernetes can be confusing at first, but these concepts are crucial for any K8s role.

19. What is a Kubernetes Service?

Answer: A Service is an abstraction that defines a logical set of pods and a policy to access them. It provides a stable IP address and DNS name for a set of pods, even as individual pods come and go.

Why do we need Services? Pods are ephemeral – they can be created, destroyed, and replaced at any time, and each time they get new IP addresses. Services solve this by providing a stable endpoint.

Think of it like a load balancer with service discovery built in. Your frontend doesn’t need to know the specific IP of your backend pods – it just connects to the Service, which routes traffic to healthy pods.
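A minimal ClusterIP Service sketch, assuming backend pods labeled app: backend that listen on port 8080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP       # the default; reachable only inside the cluster
  selector:
    app: backend        # routes traffic to any pod carrying this label
  ports:
  - port: 8080          # port the Service exposes
    targetPort: 8080    # port on the pods behind it
```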

20. Explain the different types of Services in Kubernetes.

Answer: There are four main types:

1. ClusterIP (default):

  • Exposes Service on an internal cluster IP
  • Only accessible within the cluster
  • Use case: Backend services that don’t need external access

2. NodePort:

  • Exposes Service on each node’s IP at a static port (30000-32767)
  • Makes Service accessible from outside the cluster
  • Use case: Testing, or when you don’t have a load balancer

3. LoadBalancer:

  • Creates an external load balancer (in cloud environments)
  • Gives you an external IP to access the Service
  • Use case: Production services that need external access (most common for public-facing apps)

4. ExternalName:

  • Maps Service to a DNS name
  • Returns a CNAME record
  • Use case: Creating an alias to an external service

In interviews, be ready to explain when you’d choose each type. Most production applications use ClusterIP for internal services and LoadBalancer for services that need external access.

21. What is an Ingress in Kubernetes?

Answer: An Ingress is an API object that manages external access to services, typically HTTP/HTTPS. It provides features like load balancing, SSL termination, and name-based virtual hosting.

Why use Ingress instead of LoadBalancer Services?

  • Cost: One Ingress can handle multiple services; multiple LoadBalancer Services mean multiple cloud load balancers ($$$$)
  • Features: SSL/TLS termination, path-based routing, name-based virtual hosting
  • Flexibility: More control over routing logic

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080   # Service port (illustrative)
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80     # Service port (illustrative)

Important: An Ingress alone doesn’t do anything – you need an Ingress Controller (like NGINX, Traefik, or HAProxy) to actually implement the Ingress rules.

22. What is the difference between a Service and an Ingress?

Answer: This is a common point of confusion!

Service:

  • Layer 4 (TCP/UDP) load balancing
  • Distributes traffic to pods
  • Works within the cluster (ClusterIP) or exposes a single service externally (LoadBalancer)
  • No HTTP-specific features

Ingress:

  • Layer 7 (HTTP/HTTPS) routing
  • Routes traffic to multiple Services based on hostnames and paths
  • Provides SSL termination, virtual hosting, and advanced routing
  • Requires an Ingress Controller to work

Think of it this way: Services handle the “which pods?” question, while Ingress handles the “which Service?” question based on the URL.

In a typical setup: Client → Ingress → Service → Pods

23. What is DNS in Kubernetes and how does it work?

Answer: Kubernetes has built-in DNS (CoreDNS in modern clusters) that automatically creates DNS records for Services and Pods.

Service DNS naming:

  • my-service.my-namespace.svc.cluster.local
  • Within the same namespace, you can just use my-service

Example: A pod in the frontend namespace can reach the backend service (running in the default namespace) at:

  • backend.default.svc.cluster.local (fully qualified name)
  • backend.default (service name plus namespace)

A pod in the default namespace itself can use just backend.

This DNS magic is why you don’t need to hardcode IP addresses in your applications. Your frontend can connect to http://backend-service:8080 and Kubernetes DNS figures out the rest.

Pro tip: When troubleshooting connectivity issues, always check if DNS is working first. Run nslookup my-service from inside a pod.

24. What is a NetworkPolicy?

Answer: A NetworkPolicy is like a firewall for your pods. It specifies how pods can communicate with each other and with external endpoints.

By default, pods accept traffic from any source. NetworkPolicies let you restrict this.

Example use case: You want your database pods to only accept connections from your backend pods, not your frontend pods.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-network-policy
spec:
  podSelector:
    matchLabels:
      app: database
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend

Important: NetworkPolicies only work if your CNI plugin supports them (Calico, Cilium, etc. do; basic flannel doesn’t).

25. Explain the concept of Endpoints in Kubernetes.

Answer: Endpoints are the actual IP addresses of the pods backing a Service. When you create a Service with a selector, Kubernetes automatically creates a matching Endpoints object (newer clusters also create EndpointSlice objects, which scale better, but the idea is the same).

How it works:

  1. You create a Service with a selector (e.g., app: backend)
  2. Kubernetes finds all pods matching that selector
  3. Kubernetes creates an Endpoints object listing the IPs of those pods
  4. When traffic hits the Service, it’s routed to these endpoint IPs

You can check endpoints with:

kubectl get endpoints my-service

Pro debugging tip: If your Service isn’t routing traffic correctly, check the Endpoints. If there are no endpoints, your Service selector isn’t matching any pods.


Storage and Volumes

Storage is where a lot of beginners struggle because it’s conceptually different from traditional infrastructure.

26. What is a Volume in Kubernetes?

Answer: A Volume is a directory accessible to containers in a pod. It solves the problem of ephemeral container storage – when a container restarts, its file system is lost.

Key differences from Docker volumes:

  • Kubernetes Volumes have the same lifetime as the pod (not the container)
  • Volumes can be shared between containers in a pod
  • Many different volume types are available

Common volume types:

  • emptyDir: Temporary storage that exists while the pod is running
  • hostPath: Mounts a directory from the node’s filesystem
  • configMap / secret: Inject configuration data as files
  • persistentVolumeClaim: Request persistent storage

Volumes are defined in the pod spec and mounted into containers.
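Here's what that definition-plus-mount pattern looks like for an emptyDir volume (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  volumes:
  - name: cache
    emptyDir: {}                        # exists for the life of the pod
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"] # placeholder workload
    volumeMounts:
    - name: cache                       # must match a volume name above
      mountPath: /cache                 # where the container sees the volume
```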

27. What is the difference between emptyDir and hostPath volumes?

Answer:

emptyDir:

  • Created when a pod is assigned to a node
  • Initially empty
  • Deleted when the pod is removed
  • Useful for temporary scratch space or sharing data between containers in a pod
  • Data is isolated to that specific pod

hostPath:

  • Mounts a file or directory from the host node’s filesystem
  • Data persists even after the pod is deleted
  • Useful for accessing host system files (logs, etc.)
  • Dangerous if not used carefully – pods can read/write sensitive host files
  • Pod might fail if scheduled on a different node (data isn’t portable)

When to use each:

  • emptyDir: Temporary cache, scratch space, sharing data between containers
  • hostPath: Accessing node logs, Docker socket, or other node-specific resources (rarely used in production)

28. What is a PersistentVolume (PV) and PersistentVolumeClaim (PVC)?

Answer: These are Kubernetes’ way of handling persistent storage in a more abstracted, production-ready manner.

PersistentVolume (PV):

  • A piece of storage in the cluster
  • Provisioned by an administrator or dynamically using StorageClasses
  • Independent of pod lifecycle
  • Examples: AWS EBS, GCE Persistent Disk, NFS, local disk

PersistentVolumeClaim (PVC):

  • A request for storage by a user
  • Like a “storage reservation”
  • Specifies size and access mode
  • Gets bound to a suitable PV

The workflow:

  1. Admin creates PVs (or StorageClass does it dynamically)
  2. User creates a PVC requesting storage (e.g., 10GB)
  3. Kubernetes binds the PVC to an available PV
  4. Pod references the PVC in its volume definition
  5. Pod can now use persistent storage

Analogy: PVs are like hotel rooms (available capacity), PVCs are like room reservations (requests), and pods are the guests who actually use the rooms.
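The workflow can be sketched as a PVC plus a pod that references it (sizes and names are illustrative; the binding to a PV happens behind the scenes):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi               # the "reservation" size
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data            # persistent storage appears here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim       # references the PVC above
```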

29. What is a StorageClass?

Answer: A StorageClass defines different “classes” of storage and enables dynamic provisioning of PersistentVolumes.

Without StorageClass: An admin manually creates PVs, and users claim them.

With StorageClass: Users create PVCs, and Kubernetes automatically provisions PVs from the cloud provider or storage system.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com   # AWS EBS CSI driver (replaces the retired in-tree kubernetes.io/aws-ebs)
parameters:
  type: gp3
  iops: "3000"

Common use cases:

  • Different performance tiers (HDD vs SSD)
  • Different replication policies
  • Different backup policies

Example: You might have StorageClasses named standard, fast, and archival, each backed by different underlying storage with different performance and cost characteristics.
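With a StorageClass like the fast-ssd example above in place, a PVC just names it and Kubernetes provisions a matching PV automatically (size is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim
spec:
  storageClassName: fast-ssd   # matches the StorageClass name
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi            # a PV of this size is created on demand
```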

30. What are the different access modes for volumes?

Answer: Access modes define how a volume can be mounted:

ReadWriteOnce (RWO):

  • Volume can be mounted read-write by a single node
  • Multiple pods on the same node can use it
  • Most common for block storage (AWS EBS, GCE Persistent Disk)

ReadOnlyMany (ROX):

  • Volume can be mounted read-only by many nodes
  • Useful for shared configuration or read-only data

ReadWriteMany (RWX):

  • Volume can be mounted read-write by many nodes
  • Requires shared filesystem (NFS, CephFS, GlusterFS)
  • Not supported by most cloud block storage

ReadWriteOncePod (RWOP):

  • Volume can be mounted read-write by a single pod (added in recent Kubernetes versions)
  • Stricter than RWO

Common gotcha: If you try to use AWS EBS (which is RWO) with a deployment that might schedule pods on different nodes, you’ll have issues. Use RWX storage or redesign your architecture.


Configuration and Secrets

Managing configuration without hardcoding values is crucial for production systems.

31. What is a ConfigMap?

Answer: A ConfigMap is an API object used to store non-confidential configuration data in key-value pairs. It decouples configuration from container images, making your applications more portable.

Ways to use ConfigMaps:

  1. Environment variables
  2. Command-line arguments
  3. Configuration files in a volume

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgres://db:5432"
  log_level: "info"
  feature_flags: "new-ui=true,beta=false"

Using in a pod:

containers:
- name: app
  env:
  - name: DATABASE_URL
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: database_url

Pro tip: ConfigMaps are great for configuration that differs between environments (dev, staging, prod). You keep the same container image but swap ConfigMaps.
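The third option, mounting the ConfigMap as files, can be sketched like this (each key in app-config becomes a file under the mount path):

```yaml
containers:
- name: app
  volumeMounts:
  - name: config
    mountPath: /etc/config   # database_url, log_level, etc. appear as files here
volumes:
- name: config
  configMap:
    name: app-config         # the ConfigMap defined above
```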

32. What is a Secret and how is it different from a ConfigMap?

Answer: Secrets are similar to ConfigMaps but designed for sensitive data like passwords, tokens, and keys.

Key differences:

  • Secrets are base64 encoded (not encrypted, just encoded!)
  • Secrets can be encrypted at rest (if configured)
  • Kubernetes treats Secrets with more care (not logged, more access restrictions)
  • Size limit of 1MiB (an etcd constraint that also applies to ConfigMaps)

Important: Base64 encoding is NOT encryption! Anyone with access to a Secret can decode it. For true security, use external secret management (HashiCorp Vault, AWS Secrets Manager) or enable encryption at rest.

apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: YWRtaW4=       # base64 encoded "admin"
  password: cGFzc3dvcmQ=   # base64 encoded "password"

When to use what:

  • ConfigMap: Non-sensitive config (URLs, feature flags, app settings)
  • Secret: Passwords, API keys, certificates, OAuth tokens
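If hand-encoding base64 feels error-prone, the stringData field accepts plaintext and the API server base64-encodes it on write (equivalent to the data example above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:            # write plaintext; Kubernetes stores it base64 encoded
  username: admin
  password: password
```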

33. How do you inject a Secret into a container?

Answer: There are two main ways:

1. As environment variables:

containers:
- name: app
  env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: password

2. As files in a volume:

containers:
- name: app
  volumeMounts:
  - name: secret-volume
    mountPath: /etc/secrets
volumes:
- name: secret-volume
  secret:
    secretName: db-secret

Best practice: Use volume mounts for secrets when possible. Environment variables can be accidentally exposed through logs or system information. Volume mounts are slightly more secure.

34. What are some best practices for managing Secrets?

Answer: This is where you can really show maturity in your answer:

  1. Don’t commit secrets to version control – Use tools like sealed-secrets or external secret managers
  2. Use RBAC to restrict who can access secrets
  3. Enable encryption at rest in etcd
  4. Rotate secrets regularly – Automate this process
  5. Use external secret management for production (Vault, AWS Secrets Manager, Azure Key Vault)
  6. Prefer volume mounts over environment variables
  7. Limit secret scope – Use namespace-specific secrets
  8. Audit secret access – Monitor who’s accessing what
  9. Use sealed secrets or SOPS for GitOps workflows

In interviews, mentioning external secret managers (especially in 2026 when they’re standard) shows you understand production requirements.

35. What is the difference between Opaque, kubernetes.io/tls, and kubernetes.io/dockerconfigjson secret types?

Answer: Kubernetes has different secret types for different use cases:

Opaque (default):

  • Generic secrets for any type of data
  • Most flexible
  • You define the key-value pairs

kubernetes.io/tls:

  • Specifically for TLS certificates and keys
  • Requires tls.crt and tls.key keys
  • Used by Ingress controllers for HTTPS

kubernetes.io/dockerconfigjson:

  • For storing Docker registry credentials
  • Used by kubelet to pull private images
  • Automatically created when you use kubectl create secret docker-registry

kubernetes.io/service-account-token:

  • Service account tokens
  • Usually auto-generated by Kubernetes

There are others, but these are the most common. The type mostly serves as documentation and validation – it helps Kubernetes ensure the secret has the expected structure.
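For example, a kubernetes.io/tls secret must carry exactly the tls.crt and tls.key keys (certificate data elided as placeholders here):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # required key
  tls.key: <base64-encoded private key>   # required key
```

In practice you'd usually generate this with kubectl create secret tls rather than writing it by hand.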


Troubleshooting and Best Practices

This is where interviewers separate people who’ve actually used Kubernetes from those who’ve just read about it.

36. How would you troubleshoot a pod that’s not starting?

Answer: I follow a systematic approach:

Step 1: Check pod status

kubectl get pods
kubectl describe pod <pod-name>

Look at the Events section – it usually tells you what’s wrong.

Common issues and solutions:

ImagePullBackOff:

  • Wrong image name or tag
  • Private registry without credentials
  • Network issues pulling the image
  • Solution: Check image name, verify credentials, check network

CrashLoopBackOff:

  • Application is crashing on startup
  • Solution: Check logs with kubectl logs <pod-name>

Pending:

  • Not enough resources (CPU/memory)
  • No nodes match nodeSelector/affinity
  • PersistentVolumeClaim not bound
  • Solution: Check events, verify resources available

CreateContainerConfigError:

  • Missing ConfigMap or Secret
  • Solution: Verify all referenced configs exist

Step 2: Check logs

kubectl logs <pod-name>
kubectl logs <pod-name> --previous  # logs from previous container if crashed

Step 3: Get into the container (if it’s running)

kubectl exec -it <pod-name> -- /bin/bash

Pro tip: I always check the describe output first – 80% of the time, the Events section tells you exactly what’s wrong.

37. What are some common reasons for a pod to be in CrashLoopBackOff status?

Answer: CrashLoopBackOff means the container keeps crashing, and Kubernetes is backing off on restart attempts.

Common causes:

  1. Application errors:
    • Bugs in the code
    • Uncaught exceptions
    • Failed health checks
  2. Missing dependencies:
    • Can’t connect to database
    • Missing environment variables
    • Missing ConfigMaps or Secrets
  3. Configuration issues:
    • Wrong command or arguments
    • Invalid configuration file
    • Permissions problems
  4. Resource constraints:
    • Out of memory (OOMKilled)
    • Not enough CPU
  5. Failed startup probes:
    • Application takes too long to start
    • Probe configured incorrectly

How to debug:

kubectl logs <pod-name>                    # Current logs
kubectl logs <pod-name> --previous        # Previous container logs
kubectl describe pod <pod-name>           # Check restart count and exit codes

Exit code hints:

  • Exit code 137: Container killed by OOM
  • Exit code 1: General application error
  • Exit code 0: Container exited successfully (check if it’s supposed to run continuously)

38. How do you check resource utilization in Kubernetes?

Answer: Several ways depending on what you need:

For pods and nodes:

kubectl top nodes          # CPU and memory usage per node
kubectl top pods           # CPU and memory usage per pod
kubectl top pods -A        # All namespaces

Note: This requires Metrics Server to be installed.

For detailed resource requests and limits:

kubectl describe node <node-name>    # Shows allocated vs available resources
kubectl describe pod <pod-name>      # Shows container resource requests/limits

Using monitoring tools:

  • Prometheus + Grafana (industry standard in 2026)
  • Cloud provider tools (CloudWatch, Azure Monitor, GCP Monitoring)
  • Kubernetes Dashboard
  • Third-party tools (Datadog, New Relic, Dynatrace)

Pro tip: In production, you should always have proper monitoring set up. kubectl top is great for quick checks, but you need historical data and alerting for real operations.

39. What are resource requests and limits in Kubernetes?

Answer: These define how much CPU and memory a container needs and can use.

Requests:

  • Guaranteed resources for a container
  • Used by scheduler to find suitable nodes
  • If a node doesn’t have enough available resources, pod won’t be scheduled there

Limits:

  • Maximum resources a container can use
  • If container exceeds memory limit, it gets OOMKilled
  • If container exceeds CPU limit, it gets throttled (not killed)
Example:

containers:
- name: app
  resources:
    requests:
      memory: "128Mi"
      cpu: "250m"       # 250 millicores = 0.25 CPU
    limits:
      memory: "256Mi"
      cpu: "500m"

Best practices:

  • Always set requests (helps scheduler)
  • Set limits to prevent resource hogging
  • Limits should be >= requests
  • Monitor actual usage and adjust accordingly
  • Consider QoS classes (Guaranteed, Burstable, BestEffort)

Common mistake: Setting limits way higher than requests leads to overcommitment. If all pods suddenly use their limits, nodes can run out of resources.
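On the QoS point: the class is derived from how requests and limits are set. Requests equal to limits for every container gives Guaranteed, requests lower than limits gives Burstable, and nothing set gives BestEffort. A Guaranteed sketch (values are illustrative):

```yaml
# Guaranteed QoS: requests == limits for every container in the pod
containers:
- name: app
  resources:
    requests:
      memory: "256Mi"
      cpu: "500m"
    limits:
      memory: "256Mi"
      cpu: "500m"
```

Guaranteed pods are the last to be evicted under node memory pressure, which is why this shape is common for critical workloads.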

40. What are liveness and readiness probes?

Answer: These are health checks that help Kubernetes manage your containers.

Liveness Probe:

  • Checks if the container is still running
  • If probe fails, Kubernetes kills and restarts the container
  • Use case: Detect deadlocks, infinite loops, unrecoverable failures

Readiness Probe:

  • Checks if the container is ready to serve traffic
  • If probe fails, pod is removed from Service endpoints (no traffic sent to it)
  • Use case: Application is starting up, warming up cache, waiting for dependencies

Startup Probe (stable since Kubernetes 1.20):

  • Used for slow-starting containers
  • Disables liveness/readiness checks until startup succeeds
  • Use case: Legacy applications with long initialization
Example:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5

Pro tip: Liveness probes should check if the app is alive. Readiness probes should check if the app is ready to serve traffic. Don’t make them check the same thing!
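A sketch of a startup probe for the slow-starter case (path, port, and timings are illustrative). Kubernetes holds off liveness and readiness checks until this probe succeeds:

```yaml
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30    # 30 failures allowed...
  periodSeconds: 10       # ...checked every 10s = up to 5 minutes to start
```

Without a startup probe, you'd have to inflate initialDelaySeconds on the liveness probe to cover the worst-case startup time, which then slows down failure detection for the rest of the pod's life.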

41. What are labels and selectors in Kubernetes?

Answer: Labels are key-value pairs attached to objects (pods, nodes, services, etc.). Selectors are used to query and filter objects based on labels.

Labels:

metadata:
  labels:
    app: backend
    environment: production
    version: v2.0

Why labels matter:

  • Organize and select resources
  • Service selectors use labels to find pods
  • Deployments use labels to manage ReplicaSets
  • You use labels for filtering (kubectl get pods -l app=backend)

Selectors:

Equality-based:

kubectl get pods -l environment=production
kubectl get pods -l environment!=development

Set-based:

kubectl get pods -l 'environment in (production, staging)'
kubectl get pods -l 'tier notin (frontend, backend)'

Best practices:

  • Use meaningful, consistent labels
  • Common labels: app, version, environment, component, tier
  • Labels are not unique – multiple objects can have the same labels
  • Use annotations for non-identifying metadata

42. What’s the difference between labels and annotations?

Answer: Both are metadata, but they serve different purposes.

Labels:

  • For identifying and selecting objects
  • Used by selectors (Services, Deployments, etc.)
  • Short, structured values
  • Examples: app=nginx, env=prod, version=1.0

Annotations:

  • For non-identifying metadata
  • Not used for selection
  • Can be longer, can include JSON
  • Examples: build timestamp, contact info, documentation links, tool-specific config
Example:

metadata:
  labels:
    app: web
    env: prod
  annotations:
    description: "Main web application"
    build-timestamp: "2026-02-13T10:30:00Z"
    contact: "team-backend@company.com"
    prometheus.io/scrape: "true"

Rule of thumb: If you need to select/filter objects, use labels. If it’s just informational or for tools, use annotations.

43. What are namespaces in Kubernetes and when would you use them?

Answer: Namespaces provide a way to divide cluster resources between multiple users, teams, or projects. They’re like virtual clusters within a physical cluster.

Default namespaces:

  • default: Where resources go if you don’t specify a namespace
  • kube-system: For Kubernetes system components
  • kube-public: Readable by all users, mostly for cluster information
  • kube-node-lease: For node heartbeat data

When to use namespaces:

  • Multiple teams sharing a cluster
  • Separating environments (dev, staging, prod) in the same cluster
  • Resource quotas per team/project
  • Different access controls per namespace
  • Organizing microservices by domain

Best practices:

  • Don’t go overboard – too many namespaces get hard to manage
  • Use RBAC to control access per namespace
  • Set resource quotas to prevent one namespace from consuming all resources
  • Use meaningful names (team-backend, project-payments, env-staging)
Example:

kubectl get pods -n my-namespace          # Get pods in specific namespace
kubectl get pods -A                       # Get pods in all namespaces
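The resource-quota point above maps to a ResourceQuota object. A sketch with hypothetical numbers and namespace name:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-backend      # hypothetical namespace
spec:
  hard:
    requests.cpu: "10"         # total CPU requests across the namespace
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"                 # cap on total pod count
```

Once a quota on CPU or memory exists in a namespace, pods there must declare requests/limits (or inherit them from a LimitRange), or they'll be rejected.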

44. How do you secure a Kubernetes cluster?

Answer: Security is a huge topic, but here are the key areas:

1. RBAC (Role-Based Access Control):

  • Limit who can do what in the cluster
  • Principle of least privilege
  • Use Roles and RoleBindings

2. Network Policies:

  • Control pod-to-pod communication
  • Implement micro-segmentation
  • Default deny, explicit allow

3. Pod Security:

  • Don’t run containers as root
  • Use read-only filesystems when possible
  • Drop unnecessary capabilities
  • Use Pod Security Standards/Admission

4. Secret Management:

  • Encrypt secrets at rest
  • Use external secret managers (Vault, etc.)
  • Rotate credentials regularly

5. Image Security:

  • Scan images for vulnerabilities
  • Use trusted registries
  • Implement image signing and verification

6. API Server Security:

  • TLS for all communication
  • Authentication and authorization
  • Audit logging
  • Restrict access (not publicly accessible)

7. Updates:

  • Keep Kubernetes and components updated
  • Patch CVEs promptly

8. Runtime Security:

  • Use tools like Falco for runtime threat detection
  • Monitor for anomalous behavior

In interviews, you don’t need to know everything, but show you understand that security is multi-layered and ongoing.
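The "default deny, explicit allow" pattern from the Network Policies point can be sketched as a policy that selects every pod in a namespace but allows nothing (the namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production        # hypothetical namespace
spec:
  podSelector: {}              # empty selector = every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```

Traffic is then re-enabled with additional, narrowly scoped allow policies. Note this only works if your CNI plugin (Calico, Cilium, etc.) actually enforces NetworkPolicies.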

45. What is RBAC in Kubernetes?

Answer: Role-Based Access Control (RBAC) regulates access to Kubernetes resources based on roles assigned to users or service accounts.

Key concepts:

Role / ClusterRole:

  • Defines permissions (what actions on what resources)
  • Role: Namespace-scoped
  • ClusterRole: Cluster-wide

RoleBinding / ClusterRoleBinding:

  • Grants permissions to users/groups/service accounts
  • RoleBinding: Grants role in a namespace
  • ClusterRoleBinding: Grants cluster role across all namespaces
Example:

# Role: Can read pods in default namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

# RoleBinding: Give user Jane the pod-reader role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Common verbs: get, list, watch, create, update, patch, delete

Pro tip: Always use RBAC in production. Default admin access for everyone is a security nightmare.


Advanced Concepts (Quick Overview)

These might come up in more technical interviews or for roles requiring deeper Kubernetes knowledge.

46. What is a Job in Kubernetes?

Answer: A Job creates one or more pods and ensures a specified number complete successfully. Unlike Deployments (which run continuously), Jobs run to completion.

Use cases:

  • Batch processing
  • Data migration
  • One-time tasks
  • Scheduled reports
Example:

apiVersion: batch/v1
kind: Job
metadata:
  name: data-migration
spec:
  completions: 1      # Run until 1 successful completion
  parallelism: 1      # Number of pods to run in parallel
  template:
    spec:
      containers:
      - name: migrator
        image: data-migrator:v1
      restartPolicy: Never

Key point: How failures are handled depends on restartPolicy. With Never, the Job controller creates a replacement pod when one fails; with OnFailure, the kubelet restarts the container in place. Either way, the Job gives up once the backoff limit is reached (default 6).
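A hedged sketch of the failure-handling knobs on the Job spec (values are illustrative):

```yaml
spec:
  backoffLimit: 4              # give up after 4 failed attempts
  activeDeadlineSeconds: 600   # or after 10 minutes total, whichever comes first
```

When activeDeadlineSeconds is hit, the Job and its running pods are terminated regardless of how many retries remain.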

47. What is a CronJob?

Answer: A CronJob creates Jobs on a repeating schedule (like Unix cron). It’s perfect for periodic tasks.

Use cases:

  • Nightly backups
  • Scheduled reports
  • Cleaning up old data
  • Periodic health checks
Example:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-job
spec:
  schedule: "0 2 * * *"    # At 2 AM every day
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: backup-tool:v1
          restartPolicy: OnFailure

Cron format: minute hour day month weekday

Pro tip: Always set concurrencyPolicy (Allow, Forbid, or Replace) to control what happens if a job runs long and overlaps with the next scheduled run.
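The concurrencyPolicy from the pro tip sits at the top level of the CronJob spec, alongside the history limits. A sketch:

```yaml
spec:
  schedule: "0 2 * * *"
  concurrencyPolicy: Forbid       # skip a run if the previous one is still going
  successfulJobsHistoryLimit: 3   # keep the last 3 successful Jobs around
  failedJobsHistoryLimit: 1       # keep the last failed Job for debugging
```

Forbid is the safe default for things like backups; Replace kills the old run and starts fresh, which suits tasks where only the latest result matters.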

48. What is a Horizontal Pod Autoscaler (HPA)?

Answer: HPA automatically scales the number of pods based on observed metrics (CPU, memory, or custom metrics).

How it works:

  1. HPA periodically checks metrics (default every 15 seconds)
  2. Compares current metric value to target value
  3. Calculates desired replicas
  4. Updates the Deployment/ReplicaSet
Example:

kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10

Or via YAML:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Requirements:

  • Metrics Server must be installed
  • Pods must have resource requests defined

Pro tip: In 2026, custom metrics (like request rate, queue length) are common. Don’t just rely on CPU/memory.

49. What is a Vertical Pod Autoscaler (VPA)?

Answer: VPA automatically adjusts CPU and memory requests/limits for containers based on actual usage. Unlike HPA (which adds more pods), VPA makes existing pods bigger or smaller.

Use cases:

  • Applications with unpredictable resource needs
  • Right-sizing resources automatically
  • Reducing waste in over-provisioned pods

Important: VPA requires pod restart to apply new resources (in most modes), so it’s not suitable for all applications.

In 2026, VPA is less commonly used than HPA because scaling out (more pods) is generally preferred over scaling up (bigger pods) for resilience.

50. What is Helm and why would you use it?

Answer: Helm is a package manager for Kubernetes – think of it like apt/yum for K8s. It uses “charts” (packages of pre-configured Kubernetes resources) to deploy applications.

Why use Helm:

  • Reusability: Package applications for easy deployment
  • Versioning: Track releases and rollback if needed
  • Templating: Parameterize YAML files (no copy-paste for different environments)
  • Dependency management: Charts can depend on other charts
  • Community charts: Thousands of pre-built charts for popular apps
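The templating point is easier to see with a trimmed chart template. All value names here (replicaCount, image.repository, image.tag) are hypothetical and would be defined in the chart's values.yaml:

```yaml
# templates/deployment.yaml — trimmed sketch of a Helm chart template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web        # release name injected at install time
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: web
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

At install time, Helm renders the Go-template placeholders with values from values.yaml (or --set overrides), so one chart serves dev, staging, and prod without copy-pasted manifests.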

Example:

helm install my-postgres bitnami/postgresql

This one command deploys a production-ready PostgreSQL instance with StatefulSets, Services, Secrets, etc.

Helm 3 improvements (current in 2026):

  • No Tiller (server-side component removed)
  • Secrets as default storage
  • Improved upgrade strategy
  • Release names can be reused

When to use Helm:

  • Deploying complex applications with many resources
  • Managing multiple environments
  • Sharing applications across teams
  • When you need version control for deployments

Frequently Asked Questions

Q: How long does it take to learn Kubernetes for an interview?

A: If you’re starting from scratch, give yourself at least 2-3 months of hands-on practice. You can learn the theory in a few weeks, but interviews test practical knowledge. Set up a local cluster (minikube or kind), deploy some applications, break things, and fix them. That’s how you really learn.

Q: Do I need to know YAML perfectly for a Kubernetes interview?

A: Not perfectly, but you should be comfortable reading and writing basic YAML. Know the structure of common resources (Pods, Deployments, Services). Most interviewers won’t make you write YAML from scratch, but they might ask you to spot errors or explain what a manifest does.

Q: What’s more important: knowing kubectl commands or understanding concepts?

A: Concepts, hands down. Anyone can Google kubectl commands. Understanding why you’d use a StatefulSet vs a Deployment, or when to use ClusterIP vs LoadBalancer – that’s what separates candidates. That said, knowing basic kubectl commands shows you’ve actually used Kubernetes.

Q: Should I learn Kubernetes directly or Docker first?

A: Learn Docker basics first. Understand what containers are, how to build images, how they’re different from VMs. Then Kubernetes will make way more sense because you’ll understand what it’s orchestrating. You don’t need to be a Docker expert, but you should understand container fundamentals.

Q: How do I practice Kubernetes without spending money on cloud resources?

A: Great question! You have several free options:

  • Minikube: Local single-node cluster (most popular)
  • Kind: Kubernetes in Docker (fast, disposable)
  • K3s: Lightweight Kubernetes (great for Raspberry Pi or low-resource machines)
  • Play with Kubernetes: Free browser-based environment
  • Killercoda: Interactive Kubernetes scenarios (highly recommended)

Q: What if I forget the answer to a question during the interview?

A: Be honest! Say something like “I’m not entirely sure, but here’s how I’d approach finding the answer.” Then describe your troubleshooting process. Interviewers often care more about your problem-solving approach than memorized answers. Plus, nobody knows everything – Kubernetes is huge.

Q: Are certifications worth it for Kubernetes roles?

A: In 2026, CKA (Certified Kubernetes Administrator) and CKAD (Certified Kubernetes Application Developer) are well-respected. They prove hands-on skills because they’re practical exams. BUT – certifications alone won’t get you the job. Real projects and experience matter more. If you’re just starting out, focus on hands-on practice first, then consider certification.

Q: How much Linux knowledge do I need for Kubernetes interviews?

A: Solid fundamentals are important. You should understand:

  • Basic commands (ls, cd, ps, grep, etc.)
  • File permissions
  • Environment variables
  • Process management
  • Basic networking (ports, IP addresses)
  • SSH and file transfers

You don’t need to be a Linux guru, but Kubernetes runs on Linux, so basic comfort is expected.

Q: What’s the biggest mistake beginners make in Kubernetes interviews?

A: Trying to sound more knowledgeable than they are. If you’ve only used Kubernetes in local development, don’t claim production experience. Interviewers can tell. Instead, be honest about your experience level and show enthusiasm to learn. I’ve hired people who admitted knowledge gaps but demonstrated strong problem-solving skills over “experts” who couldn’t admit when they didn’t know something.


Conclusion

Whew! That was a lot, right? But here’s the truth: you don’t need to memorize all 50 of these answers word-for-word. What you need is to understand the concepts so you can explain them in your own words.

The best Kubernetes candidates I’ve interviewed haven’t been the ones with perfect textbook answers. They’ve been the ones who could:

  • Explain concepts clearly without jargon
  • Relate ideas to real-world problems
  • Admit when they don’t know something
  • Demonstrate problem-solving skills
  • Show genuine curiosity and passion for the technology

My advice? Pick 5-10 of these questions, deploy actual resources in a local cluster, break them, fix them. That hands-on experience is worth more than reading a hundred blog posts.

A few final tips for your interview:

Before the interview:

  • Set up a local cluster and practice
  • Deploy a real application (not just nginx)
  • Read your resume and be ready to discuss any Kubernetes experience
  • Prepare questions to ask them (about their infrastructure, challenges, etc.)

During the interview:

  • Take a breath before answering – it’s okay to think
  • Ask clarifying questions if something’s unclear
  • Walk through your thought process out loud
  • Use analogies to explain complex concepts
  • Be honest about experience gaps

After the interview:

  • Send a thank-you email
  • Reflect on what went well and what to improve
  • If you didn’t get the job, ask for feedback

Remember: every Kubernetes expert started as a beginner. I certainly bombed my first K8s interview, and look at me now – writing guides for others! You’ve got this.

The fact that you’ve made it to the end of this 18-minute read tells me you’re serious about learning Kubernetes. That dedication will take you far. Keep practicing, stay curious, and don’t be afraid to break things in your test environment – that’s where the real learning happens.

Good luck with your interview! You’re going to do great. 🚀


About the Author

Kedar Salunkhe

DevOps Engineer | Seven years of fixing things that break at 2am

Kubernetes • OpenShift • AWS • Coffee
I’ve spent almost 7 years keeping production systems running, often when everyone else is asleep. These days I’m working with Kubernetes and OpenShift deployments, automating everything that can be automated, and occasionally remembering to document the things I fix. When I’m not troubleshooting clusters, I’m probably trying out new DevOps tools or explaining to someone why we can’t just “restart everything” as a debugging strategy. You can usually find me where the coffee is strong and the error logs are confusing.


Additional Resources

Want to dive deeper into Kubernetes? Here are some resources I personally recommend:

Tools to Know

  • kubectl: Official CLI (you’ll use this daily)
  • k9s: Terminal UI for Kubernetes (game-changer for productivity)
  • Helm: Package manager for Kubernetes
  • Lens: Desktop GUI for Kubernetes (great for visualization)
  • Octant: Web-based Kubernetes dashboard


Did this guide help you? Drop a comment below with the question you found most challenging, or share your own Kubernetes interview experience. Let’s learn together!

Preparing for an interview? Bookmark this page and revisit the sections you find tricky. And hey, you’ve got this! 💪
