Why Won’t My Pod Start? Fixing Kubernetes Volume Mount Errors (ContainerCreating, FailedMount, Permission Denied)

Last Updated: January 10, 2026

It’s Tuesday morning. You just deployed your app to production. The deployment shows success, but your pod has been stuck in “ContainerCreating” for more than 10 minutes. You check the events and see:

Warning  FailedMount  MountVolume.SetUp failed for volume "data": 
permission denied

Sound familiar? I’ve been there more times than I care to admit, and I’m guessing that’s why you’re here.

Kubernetes volume mount errors are one of the most common reasons a pod gets stuck in the ContainerCreating state. Errors like MountVolume.SetUp failed, permission denied, and unable to attach or mount volumes can block a deployment even when your PVCs are bound and your YAML is correct.

Common Kubernetes Volume Mount Errors


  • Pod stuck in ContainerCreating
  • MountVolume.SetUp failed
  • Permission denied while mounting volume
  • Unable to attach or mount volumes
  • Read-only file system error
  • SubPath directory not found
  • Multi-Attach volume error

Let me walk you through 10 of the most common volume mount errors I’ve encountered and how to fix them quickly.

1. Pod Stuck in ContainerCreating State

This is the classic “nothing is happening” scenario for Kubernetes volume mount errors. Your pod shows:

$ kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
myapp-7d9f8b6c5-xyz     0/1     ContainerCreating   0          25m

What’s Actually Happening

The pod is waiting for its volumes to be mounted before the container can start. Something in the volume mounting process is stuck.

Quick debugging

# Check what is wrong
kubectl describe pod myapp-7d9f8b6c5-xyz

# Look at the Events section

Common messages in the Events section:

  • “Unable to attach or mount volumes”
  • “MountVolume.SetUp failed”
  • “timeout expired waiting for volumes to attach”

Common Causes

1. PVC doesn’t exist or isn’t bound

# Check if the PVC exists and is bound
kubectl get pvc

# If it shows Pending, that's your problem
# Fix the PVC first (check my PV/PVC troubleshooting guide)

2. Volume plugin isn’t available on the node

# Check if CSI driver pods are running
kubectl get pods -n kube-system | grep csi

# If missing, install the required CSI driver

3. Node doesn’t have access to the storage

# For cloud volumes, check if node has proper IAM role
# For NFS, check if node can reach NFS server
kubectl get nodes -o wide

The Fix That Works Most of the Time

# Delete the stuck pod (deployment will recreate it)
kubectl delete pod myapp-7d9f8b6c5-xyz

# If PVC is the issue, check its status
kubectl describe pvc my-pvc

# Make sure the volume is actually available
kubectl get pv
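If you find yourself checking PVC status over and over, a tiny helper can flag anything that isn’t Bound. This is just a sketch – it parses `kubectl get pvc --no-headers` output, and the sample data below is made up for illustration:

```shell
#!/bin/sh
# Flag any PVC that is not Bound.
# Real use: kubectl get pvc --no-headers | find_unbound_pvcs
find_unbound_pvcs() {
  # Columns of `kubectl get pvc`: NAME STATUS VOLUME CAPACITY ACCESS-MODES ...
  awk '$2 != "Bound" { print $1 " is " $2 }'
}

# Illustrative sample of `kubectl get pvc --no-headers` output
sample='my-data Bound pvc-abc123 10Gi RWO gp2 5m
cache-data Pending <none> <none> <none> gp2 5m'

printf '%s\n' "$sample" | find_unbound_pvcs
# Prints: cache-data is Pending
```

Anything this prints is a PVC to fix before the pod has any chance of starting.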

2. MountVolume.SetUp Failed

This error appears when Kubernetes cannot prepare the volume for mounting.

What You’ll See

Warning  FailedMount  MountVolume.SetUp failed for volume "config-volume": 
configmap "app-config" not found

or

Warning  FailedMount  MountVolume.SetUp failed for volume "secret-volume": 
secret "app-secret" not found

The Issue

The volume is referencing something (ConfigMap, Secret, PVC) that does not exist or is in the wrong namespace.

How I Fix This

For ConfigMaps:

# Check if the ConfigMap exists
kubectl get configmap app-config

# If not found, create it
kubectl create configmap app-config --from-file=config.yaml

# Make sure it's in the same namespace as your pod
kubectl get configmap app-config -n your-namespace

For Secrets:

# Check if Secret exists
kubectl get secret app-secret

# Create if missing
kubectl create secret generic app-secret --from-literal=key=value

For PVCs:

# Verify PVC exists and is bound
kubectl get pvc my-data

# If Pending, fix the PVC issue first
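To catch these “not found” errors before deploying, you can list every ConfigMap and Secret a manifest’s volumes reference and check each one with kubectl. Here’s a rough grep-based sketch – it assumes the simple `configMap:`/`name:` and `secret:`/`secretName:` layout shown above, so treat it as a starting point rather than a YAML parser:

```shell
#!/bin/sh
# List the ConfigMap and Secret names a manifest's volumes reference,
# so each can then be checked with `kubectl get configmap <name>` etc.
list_volume_refs() {
  grep -A1 -E 'configMap:|secret:' |
    grep -E 'name:|secretName:' |
    awk '{ print $NF }'
}

# Illustrative volumes section
sample='volumes:
- name: config-volume
  configMap:
    name: app-config
- name: secret-volume
  secret:
    secretName: app-secret'

printf '%s\n' "$sample" | list_volume_refs
# Prints app-config and app-secret, one per line
```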

3. Unable to Attach or Mount Volumes

This is usually a cloud provider issue or CSI driver issue.

Error Message

Warning  FailedAttachVolume  AttachVolume.Attach failed for volume 
"pvc-x2z": rpc error: code = Unknown

Common Scenarios

Scenario 1: Volume in different availability zone

This killed me during a deployment once. My node was in us-east-2a, but the EBS volume was in us-east-2b.

# Check volume zone
kubectl get pv pvc-xyz -o yaml | grep zone

# Check node zone
kubectl get node node-name -o yaml | grep zone

Fix:

# Option 1: Use WaitForFirstConsumer in the StorageClass
volumeBindingMode: WaitForFirstConsumer

# Option 2: Constrain the pod to the same zone as the volume
nodeSelector:
  topology.kubernetes.io/zone: us-east-2a

Scenario 2: Volume already attached to another node

This happens with ReadWriteOnce volumes when a pod moves to a different node.

# Check volume attachments
kubectl get volumeattachment | grep pvc-x1z

# Force detach if it’s stuck
kubectl delete volumeattachment <attachment-name>

Scenario 3: Node reached max volume limit

AWS nodes have limits on EBS volumes (typically 39 per node).

# Check how many volumes are attached to node
kubectl get volumeattachment | grep node-name | wc -l

# If at limit, scale to different node or remove unused volumes
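Counting attachments per node is easy to script. A sketch that tallies the NODE column of `kubectl get volumeattachment --no-headers` output (the sample data is illustrative):

```shell
#!/bin/sh
# Tally attached volumes per node.
# Real use: kubectl get volumeattachment --no-headers | count_per_node
count_per_node() {
  # Column 4 of `kubectl get volumeattachment` is NODE
  awk '{ count[$4]++ } END { for (n in count) print n, count[n] }' | sort
}

# Illustrative sample (NAME ATTACHER PV NODE ATTACHED AGE)
sample='csi-aaa ebs.csi.aws.com pvc-111 node-a true 5m
csi-bbb ebs.csi.aws.com pvc-222 node-a true 3m
csi-ccc ebs.csi.aws.com pvc-333 node-b true 1m'

printf '%s\n' "$sample" | count_per_node
# Prints: node-a 2, then node-b 1, one per line
```

Any node close to its limit is a candidate for cordoning or for cleaning up unused volumes.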

4. Timeout Expired Waiting for Volumes

Pod events show:

Warning  FailedMount  Unable to attach or mount volumes: 
timeout expired waiting for volumes to attach or mount

Why This Happens

The attach operation is taking too long – usually because the storage backend is slow or having issues.

My Debugging Process

Step 1: Check CSI driver logs

# For AWS EBS
kubectl logs -n kube-system -l app=ebs-csi-controller --tail=30

# Look for errors about API rate limits, permissions, or timeouts

Step 2: Check the cloud provider API limits

# For AWS, check CloudWatch for throttling
# Volume attach operations might be rate limited

Step 3: Check if the storage backend is healthy

# For NFS
kubectl exec -it debug-pod -- ping nfs-server-ip

# For cloud storage, check the provider's dashboard

Quick Fix

# Delete pod to retry
kubectl delete pod stuck-pod-name

# Check if node can access storage
kubectl describe node node-name | grep -A 20 "Allocated resources"

# Scale deployment to force pod to different node
kubectl scale deployment myapp --replicas=0
kubectl scale deployment myapp --replicas=1
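If you retry these fixes often, a small polling helper saves you from babysitting the pod. This sketch takes the status command as a parameter so it can be exercised without a cluster – in real use you’d pass something like `kubectl get pod myapp -o jsonpath={.status.phase}` (that invocation and the pod name are examples, adjust to your setup):

```shell
#!/bin/sh
# Poll a status command until it reports the value we want, or give up.
# Example real call (pod name is illustrative):
#   wait_for_status "kubectl get pod myapp -o jsonpath={.status.phase}" Running 60
wait_for_status() {
  cmd=$1; want=$2; timeout=$3
  waited=0
  while [ "$waited" -lt "$timeout" ]; do
    # Run the command and compare its output to the desired status
    [ "$($cmd)" = "$want" ] && return 0
    sleep 1
    waited=$((waited + 1))
  done
  echo "timed out after ${timeout}s waiting for '$want'" >&2
  return 1
}

# Demo with a stand-in command instead of kubectl
wait_for_status "echo Running" Running 5 && echo "pod is Running"
```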

5. Mount Failed: Permission Denied

This one frustrated me for hours when I first encountered it on a production cluster.

Error Message

Warning  FailedMount  MountVolume.SetUp failed: 
mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs server:/path /var/lib/kubelet/pods/.../volumes/
Output: mount.nfs: access denied by server

The Problem

Your pod does not have the right permissions to access the volume. Usually a UID/GID mismatch or filesystem permissions issue.

Real Solutions

1: Fix the securityContext

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  securityContext:
    fsGroup: 1000      # Set this to your app's GID
    runAsUser: 1000    # Set this to your app's UID
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: data
      mountPath: /data

2: For NFS volumes

volumes:
- name: nfs-volume
  nfs:
    server: nfs-server.example.com
    path: /exported/path
    readOnly: false

Then check NFS server exports:

# On NFS server
cat /etc/exports

# Should have something like:
/exported/path *(rw,sync,no_subtree_check,no_root_squash)

3: For PVCs, check PV access mode

kubectl get pv -o custom-columns=NAME:.metadata.name,MODE:.spec.accessModes

# Make sure it's not ReadOnly when you need ReadWrite

Kubernetes Volume Permission Debugging Checklist

# 1. Check what user the container runs as
kubectl exec -it pod-name -- id

# 2. Check volume permissions
kubectl exec -it pod-name -- ls -la /data

# 3. Check if fsGroup is set
kubectl get pod pod-name -o yaml | grep -A 5 securityContext
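Steps 1 and 2 of that checklist boil down to one question: does the container’s UID match the mount point’s owner? A tiny sketch of that comparison (the `ls -ln` line here is sample data):

```shell
#!/bin/sh
# Compare the UID a container runs as (from `kubectl exec ... -- id -u`)
# with the numeric owner of the mount point (from `ls -lnd <path>`).
uid_matches_owner() {
  uid=$1
  ls_line=$2
  # Field 3 of `ls -ln` output is the numeric owner UID
  owner=$(printf '%s\n' "$ls_line" | awk '{ print $3 }')
  if [ "$uid" = "$owner" ]; then
    echo "match"
  else
    echo "mismatch: container uid $uid vs directory owner $owner"
  fi
}

# Illustrative: container runs as 1000, /data owned by uid 0 (root)
uid_matches_owner 1000 'drwxr-xr-x 2 0 0 4096 Jan 10 00:00 /data'
# Prints: mismatch: container uid 1000 vs directory owner 0
```

A mismatch here is exactly what fsGroup/runAsUser in the securityContext is meant to fix.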

6. Mount Failed: No Such File or Directory

Error Message

MountVolume.SetUp failed: mount failed: exit status 32
Output: mount: /var/lib/kubelet/pods/.../volumes/...: 
special device /dev/xvdf does not exist

or

Error: failed to create subPath directory: 
mkdir /var/lib/kubelet/pods/.../volumes/app-data/logs: 
no such file or directory

What’s Wrong

Either the device doesn’t exist, the path is wrong, or you’re trying to create a subPath in a non-existent directory.

Common Causes & Fixes

Cause 1: Device is not attached yet

Wait a bit – the volume might still be attaching:

# Check if volume is attached
kubectl get volumeattachment

# Give it 3-4 minutes, then check the pod again

Cause 2: Wrong subPath in the volumeMount

# This will fail if the 'logs' directory doesn't exist in the volume
volumeMounts:
- name: data
  mountPath: /app/logs
  subPath: logs  # Directory must already exist!

Fix: Create the directory first or don’t use the subPath:

# Option 1: Remove the subPath
volumeMounts:
- name: data
  mountPath: /app/logs

# Option 2: Use init container to create directory
initContainers:
- name: create-dirs
  image: busybox
  command: ['sh', '-c', 'mkdir -p /data/logs']
  volumeMounts:
  - name: data
    mountPath: /data

Cause 3: HostPath pointing to the non-existent directory

volumes:
- name: host-data
  hostPath:
    path: /data/app  # This directory must exist on node!
    type: Directory

Check on the node:

kubectl debug node/node-name -it --image=busybox
ls -la /data/app

7. Mount Failed: Read-Only File System

Error You’ll See

Warning  FailedMount  MountVolume.SetUp failed: 
mount failed: exit status 1
Output: mount: /var/...: cannot mount read-only

or your app logs show:

Error: cannot write to /data: read-only file system

The Problem

The volume is mounted as read-only, but your app needs write access.

How to Fix

Check 1: PV access mode

kubectl get pv -o yaml | grep -A 3 accessModes
# If it shows ReadOnlyMany, you cannot write to it

Check 2: volumeMount has readOnly setting

# Make sure this isn't set to true
volumeMounts:
- name: data
  mountPath: /data
  readOnly: false  # Should be false or omitted

Check 3: PVC access mode

# PVC should request ReadWriteOnce or ReadWriteMany
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce  # Not ReadOnlyMany!

Check 4: Filesystem itself might be corrupted

# Exec into pod
kubectl exec -it pod-name -- sh

# Try to create a file
touch /data/test.txt

# If it fails with read-only error, filesystem might be remounted ro
# This happens when there are disk errors

Fix for corrupt filesystem:

# Delete pod to unmount volume
kubectl delete pod pod-name

# Check the volume health in the cloud provider console
# May need to create a snapshot and restore to a new volume

8. Volume Already Mounted

Error Message

Multi-Attach error for volume "pvc-x2z": 
Volume is already exclusively attached to one node and can't be attached to another

Why This Happens

You’re using a ReadWriteOnce (RWO) volume, and it’s already attached to a different node. This is common when:

  • Pod is being rescheduled
  • Deployment has multiple replicas
  • Previous pod is stuck terminating

The Fix

Option 1: Wait for the old pod to fully terminate

# Check if the old pod is still terminating
kubectl get pods | grep Terminating

# If stuck, then force delete it
kubectl delete pod old-pod-name --force --grace-period=0

# New pod should mount volume successfully now
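Spotting stuck Terminating pods is another easy one to script – STATUS is the third column of `kubectl get pods` output. A quick sketch with illustrative sample data:

```shell
#!/bin/sh
# Print pods stuck in Terminating.
# Real use: kubectl get pods --no-headers | terminating_pods
terminating_pods() {
  # Column 3 of `kubectl get pods` is STATUS
  awk '$3 == "Terminating" { print $1 }'
}

# Illustrative sample of `kubectl get pods --no-headers` output
sample='myapp-new-abc 0/1 ContainerCreating 0 2m
myapp-old-xyz 0/1 Terminating 0 9m'

printf '%s\n' "$sample" | terminating_pods
# Prints: myapp-old-xyz
```

Anything it prints is a candidate for the force-delete above (after you’ve confirmed it’s safe).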

Option 2: Use ReadWriteMany if you need multi-pod access

# Change the PVC to use RWX-capable storage (like EFS, NFS)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany  # Allows multiple pods
  storageClassName: efs-sc  # Must support RWX

Option 3: Don’t scale deployments with RWO volumes

# For StatefulSets with RWO volumes
apiVersion: apps/v1
kind: StatefulSet  # Use this instead of Deployment
metadata:
  name: app
spec:
  replicas: 1  # Each pod gets its own volume
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]

9. Volume Not Found

Error Message

Warning  FailedMount  Unable to attach or mount volumes: 
volume "data-volume" not found

The Problem

The volumeMount in your pod spec references a volume that doesn’t exist or is misnamed.

Quick Fix

Check your pod YAML:

spec:
  containers:
  - name: app
    volumeMounts:
    - name: data-volume  # Must match volume name below!
      mountPath: /data
  volumes:
  - name: data-volume    # Names must match exactly!
    persistentVolumeClaim:
      claimName: my-pvc

Common Mistakes

Typo in the volume name:

volumeMounts:
- name: data-Volume  # Uppercase V
volumes:
- name: data-volume  # Lowercase v - won't match!

Missing volume definition:

# You have volumeMount but no volume!
volumeMounts:
- name: config
  mountPath: /config
# No volumes: section at all - will fail

Wrong indentation:

# This is wrong
containers:
- name: app
  volumes:  # Indented too far!
  - name: data
    persistentVolumeClaim:
      claimName: pvc

# Should be:
containers:
- name: app
volumes:  # At spec level
- name: data
  persistentVolumeClaim:
    claimName: pvc
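All three mistakes above are variations of “a volumeMounts name has no matching volumes entry”, which you can sanity-check with a bit of awk before applying a manifest. This is a rough sketch tied to the simple indentation style used in this post – it is not a general YAML validator:

```shell
#!/bin/sh
# Report volumeMounts names that have no matching volumes entry.
unmatched_mounts() {
  awk '
    /volumeMounts:/ { sect = "mounts"; next }
    /^ *volumes:/   { sect = "volumes"; next }
    /- name:/ {
      if (sect == "mounts")       mounts[$NF] = 1
      else if (sect == "volumes") vols[$NF]   = 1
    }
    END {
      for (m in mounts)
        if (!(m in vols)) print m
    }'
}

# Illustrative manifest fragment: "config" has no volumes entry
sample='containers:
- name: app
  volumeMounts:
  - name: data-volume
    mountPath: /data
  - name: config
    mountPath: /config
volumes:
- name: data-volume
  persistentVolumeClaim:
    claimName: my-pvc'

printf '%s\n' "$sample" | unmatched_mounts
# Prints: config
```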

10. Failed to Create the SubPath Directory

Error Message

Error: failed to create subPath directory for volumeMount "data" of container "app":
mkdir /var/lib/kubelet/pods/.../data/logs: no such file or directory

What’s Happening

You’re using subPath in your volumeMount, but the subdirectory doesn’t exist in the volume.

The Problem

volumeMounts:
- name: data
  mountPath: /app/logs
  subPath: logs  # 'logs' directory doesn't exist in the volume!

Solution 1: Create the Directory First

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  initContainers:
  - name: init-dirs
    image: busybox
    command: ['sh', '-c', 'mkdir -p /data/logs /data/config /data/cache']
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: data
      mountPath: /app/logs
      subPath: logs  # Now it exists!
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data

Solution 2: Don’t Use subPath

# Instead of mounting subdirectories separately
volumeMounts:
- name: data
  mountPath: /app/logs
  subPath: logs
- name: data
  mountPath: /app/config
  subPath: config

# Just mount the whole volume
volumeMounts:
- name: data
  mountPath: /app/data
# Then use /app/data/logs, /app/data/config in your app

Solution 3: Use subPathExpr for Dynamic Paths

volumeMounts:
- name: data
  mountPath: /app/logs
  subPathExpr: $(POD_NAME)/logs
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name

Quick Troubleshooting Checklist

When you hit a volume mount error:

# 1. Check pod events (this solves 80% of issues)
kubectl describe pod <pod-name> | grep -A 10 Events

# 2. Check if PVC is bound
kubectl get pvc

# 3. Check if volume exists
kubectl get pv

# 4. Check CSI driver status
kubectl get pods -n kube-system | grep csi

# 5. Check volume attachments
kubectl get volumeattachment

# 6. Check pod security context
kubectl get pod <pod-name> -o yaml | grep -A 10 securityContext

# 7. Try deleting and recreating pod
kubectl delete pod <pod-name>

When All Else Fails

If you’ve tried everything:

  1. Check node disk space
kubectl describe node <node-name> | grep -i disk
  2. Check kubelet logs on the node
kubectl debug node/<node-name> -it --image=ubuntu
journalctl -u kubelet -n 100
  3. Restart the kubelet (careful in production!)
systemctl restart kubelet
  4. Create a minimal test pod
apiVersion: v1
kind: Pod
metadata:
  name: mount-test
spec:
  containers:
  - name: test
    image: busybox
    command: ['sleep', '3600']
    volumeMounts:
    - name: test-vol
      mountPath: /data
  volumes:
  - name: test-vol
    emptyDir: {}  # Start simple

Final Thoughts

Volume mount errors look scary, but they usually boil down to:

  • Permissions – wrong UID/GID/fsGroup
  • Timing – the volume isn’t ready yet
  • Configuration – typos, wrong paths, missing directories
  • Infrastructure – CSI driver issues, zone mismatches

The key is reading the error message carefully. Kubernetes usually tells you exactly what’s wrong – you just need to know where to look.

Start with kubectl describe pod, check the Events section, and work from there. Most issues resolve in 5 minutes once you find the actual error message.

And remember – if your pod has been “ContainerCreating” for more than 5-6 minutes, something is definitely wrong. Don’t wait around hoping it’ll fix itself.

Additional References

Quick Command Reference

# Essential debugging commands
kubectl describe pod <pod-name>
kubectl get pvc
kubectl get pv  
kubectl get volumeattachment
kubectl logs -n kube-system -l app=ebs-csi-controller
kubectl get events --sort-by='.lastTimestamp'


# Force fixes
kubectl delete pod <pod-name> --force --grace-period=0
kubectl delete volumeattachment <attachment-name>

# Check permissions
kubectl exec -it <pod-name> -- id
kubectl exec -it <pod-name> -- ls -la /mount/path

Have a volume mount error that’s not covered here? Drop it in the comments and let’s figure it out together.

You can also check my recent blog posts in the Blog Feed of my website.
