Kubernetes Volume Permission Denied Errors (Root Causes & Fixes)

Last Updated: January 2026

It was 11 PM on a Thursday when I got the Slack notification: production pods failing. The error message? Simple and infuriating:

sh: can't create /data/files.txt: Permission denied

The pod was running. The volume was mounted. Everything looked right. But the application couldn’t write to its own data directory.

I spent three hours that night learning about Kubernetes volume permission errors the hard way. Turns out, UID 1000 in my container didn’t match the volume’s ownership (UID 0). Classic beginner mistake that somehow made it to production.

If you’re getting “permission denied” errors with volumes, or your pods mysteriously can’t read their own storage, you’re not alone. Volume permissions in Kubernetes are surprisingly tricky – especially when security contexts, fsGroup settings, and access modes all interact.

Let me save you those three hours (and the embarrassment of a midnight production issue).

In this article I walk through the most common volume permission errors and their fixes, drawn from real-world production scenarios.

The Basics: Why Kubernetes Volume Permission Denied Errors Are Confusing

Before we fix errors, let’s understand what’s actually happening.

When you mount a volume in Kubernetes:

  1. Volume exists with certain ownership (UID/GID)
  2. Container runs as a specific user
  3. Linux checks: “Does this user have permission to access this path?”

If any of these don’t align, you get “kubernetes volume permission denied.”

Three things control access:

  • Volume ownership – Who owns the files (UID/GID)
  • Container user – Who the app runs as
  • SecurityContext – Kubernetes settings that modify permissions

Seems simple. But wait until you add access modes, fsGroup, SELinux, and cloud provider quirks.
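That three-way check can be reproduced locally with `id` and `stat` (Linux, GNU or busybox coreutils; the temp directory is a hypothetical stand-in for a volume mount):

```shell
#!/bin/sh
# What the container's main process would report
MY_UID=$(id -u)
MY_GID=$(id -g)

# Who owns the target path (%u = owner UID, %g = group GID)
DIR=$(mktemp -d)                    # stand-in for a mount like /data
OWNER_UID=$(stat -c '%u' "$DIR")
OWNER_GID=$(stat -c '%g' "$DIR")

# The kernel picks ONE set of permission bits, in this order:
if [ "$MY_UID" -eq "$OWNER_UID" ]; then
    echo "owner match: the 'user' bits apply"
elif [ "$MY_GID" -eq "$OWNER_GID" ] || id -G | grep -qw "$OWNER_GID"; then
    echo "group match: the 'group' bits apply"
else
    echo "no match: the 'other' bits apply"
fi
```

Because we just created the directory, the owner branch fires here. In the broken pod, UID 1000 against owner UID 0 falls through to the “other” bits – which rarely include write.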

1. ReadWriteOnce Volume Mounted on Multiple Nodes

The Error

$ kubectl describe pod app-xyz
Events:
  Warning  FailedAttachVolume  Multi-Attach error for volume "pvc-abc123"
  Volume is already exclusively attached to one node and can't be attached to another

Your deployment has 3 replicas, but only one pod starts. The others hang forever.

What ReadWriteOnce Actually Means

ReadWriteOnce (RWO) = One node at a time can mount this volume.

Not “one pod” – one node. If two pods are on the same node, they can both use an RWO volume. If they’re on different nodes? Game over.

When This Happens to You

# This won't work with RWO volumes!
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3  # Pods will spread across nodes
  template:
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc  # RWO volume!

I learned this the hard way when scaling our app from 1 to 5 replicas. First pod started fine. The other four? Stuck.

How to Fix It

Option 1: Use StatefulSet (if each pod needs its own data)

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: webapp
spec:
  replicas: 3
  volumeClaimTemplates:  # Each pod gets its own PVC/volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

Option 2: Use ReadWriteMany storage (if pods share data)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany  # Multiple nodes can mount
  storageClassName: efs-sc  # Must support RWX!
  resources:
    requests:
      storage: 10Gi

Option 3: Pin pods to same node (not recommended)

spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: webapp
        topologyKey: kubernetes.io/hostname

Check What You Have

# Check PVC access mode
kubectl get pvc my-pvc -o jsonpath='{.spec.accessModes}'

# Check how many nodes pods are on
kubectl get pods -o wide | grep app

2. ReadWriteMany Not Supported

The Error

$ kubectl describe pvc shared-storage
Events:
  Warning  ProvisioningFailed  failed to provision volume: 
  requested access mode ReadWriteMany is not supported

Your PVC sits in Pending forever because your storage doesn’t support RWX.

What Most People Don’t Know

Not all storage types support ReadWriteMany:

Don’t support RWX:

  • AWS EBS
  • GCP Persistent Disk
  • Azure Disk
  • Most block storage

Do support RWX:

  • AWS EFS
  • GCP Filestore
  • Azure Files
  • NFS servers
  • CephFS
  • GlusterFS

Real-World Scenario

I once tried to scale a PHP application with session storage on an EBS-backed PVC. Set access mode to ReadWriteMany. PVC stayed Pending for 30 minutes before I realized EBS doesn’t support it.

The Fix

Check what your StorageClass supports:

# Check StorageClass details
kubectl get sc standard -o yaml

# Look at the provisioner
provisioner: kubernetes.io/aws-ebs  # EBS = no RWX support

Option 1: Switch to RWX-capable storage

# For AWS - use EFS instead of EBS
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-storage
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc  # EFS supports RWX
  resources:
    requests:
      storage: 20Gi

Option 2: Use ReadWriteOnce with StatefulSet

If you don’t actually need shared storage:

# Each pod gets its own volume
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]  # Works with EBS

Option 3: Use external shared storage

For things like uploads or session storage:

# Use S3, Redis, or external NFS instead of volumes

Quick Check

# See all StorageClasses and their capabilities
kubectl get sc -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner

# Test if RWX works
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
EOF

# Check if it binds
kubectl get pvc rwx-test
# If Pending = RWX not supported

3. Permission Denied Inside Container

The Error

$ kubectl logs app-pod
Error: EACCES: permission denied, open '/data/file.txt'

or

$ kubectl exec -it app-pod -- touch /data/test
touch: cannot touch '/data/test': Permission denied

The classic. Volume mounted successfully, but the app can’t write to it.

Why This Happens

Your container runs as UID 1000, but the volume is owned by UID 0 (root). Linux says “nope.”

Debug the Problem

# Check what user your container runs as
kubectl exec -it app-pod -- id
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)

# Check volume ownership
kubectl exec -it app-pod -- ls -la /data
drwxr-xr-x 2 root root 4096 Jan 10 10:00 /data
# ↑ Owned by root (UID 0), your app is UID 1000 - problem!

The Fix: Use fsGroup

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  securityContext:
    fsGroup: 1000  # Make volume accessible to GID 1000
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-pvc

After setting fsGroup: 1000:

$ kubectl exec -it app-pod -- ls -la /data
drwxrwsr-x 2 root 1000 4096 Jan 10 10:00 /data
#              ↑ GID is now 1000, setgid bit set
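
That `s` in drwxrwsr-x is the setgid bit: kubelet sets it so files created anywhere under the mount inherit the directory’s group. The mechanism is plain Linux and can be tried locally with no cluster (the temp directory stands in for the volume):

```shell
#!/bin/sh
# Demonstrate the setgid behavior that fsGroup relies on
DIR=$(mktemp -d)
chmod g+s "$DIR"              # setgid: new children inherit the dir's group

touch "$DIR/newfile"
echo "dir group:  $(stat -c '%g' "$DIR")"
echo "file group: $(stat -c '%g' "$DIR/newfile")"
# Both groups match - exactly how files written by the app
# end up group-owned by the fsGroup GID
```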

Quick hack (not recommended):

securityContext:
  runAsUser: 0  # Run as root - works, but a security risk

Alternative: Change Volume Ownership with Init Container

initContainers:
- name: fix-permissions
  image: busybox
  command: ['sh', '-c', 'chown -R 1000:1000 /data']
  volumeMounts:
  - name: data
    mountPath: /data
  securityContext:
    runAsUser: 0  # Init container runs as root to fix permissions

4. fsGroup Not Applied

The Problem

You set fsGroup: 1000 but it doesn’t work. Volume still owned by root.

securityContext:
  fsGroup: 1000  # You set this

# But inside the pod:
$ ls -la /data
drwxr-xr-x 2 root root 4096 Jan 10 10:00 /data  # Still root!

Why fsGroup Doesn’t Always Work

1. Volume type doesn’t support fsGroup

Some volume types ignore fsGroup:

  • HostPath volumes
  • Local volumes
  • Some CSI drivers

2. fsGroup set at wrong level

# Wrong - at container level
containers:
- name: app
  securityContext:
    fsGroup: 1000  # This doesn't work here!

# Right - at pod level
spec:
  securityContext:
    fsGroup: 1000  # Must be here!

3. Existing files already have wrong ownership

With the default fsGroupChangePolicy: Always, kubelet recursively changes group ownership on every mount (which can be slow on large volumes). With OnRootMismatch, the recursive walk is skipped whenever the root directory already matches – so stale ownership deeper in the tree can survive.

How I Debug This

# Check if fsGroup is actually set
kubectl get pod app-pod -o jsonpath='{.spec.securityContext.fsGroup}'
# Should return: 1000

# Check if CSI driver supports fsGroup
kubectl get csidriver ebs.csi.aws.com -o yaml | grep -i fsgroup

# Check actual volume permissions
kubectl exec -it app-pod -- ls -la /data

The Fix

For new volumes:

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  securityContext:
    fsGroup: 1000        # At pod level!
    fsGroupChangePolicy: "OnRootMismatch"  # Only fix root directory
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000

For existing volumes with wrong permissions:

initContainers:
- name: fix-ownership
  image: busybox
  command:
  - sh
  - -c
  - |
    chown -R 1000:1000 /data
    chmod -R g+rw /data
  volumeMounts:
  - name: data
    mountPath: /data
  securityContext:
    runAsUser: 0  # Needs root to chown

For volume types that don’t support fsGroup:

# Use init container (only option)
initContainers:
- name: permission-fix
  image: busybox
  command: ['sh', '-c', 'chmod 777 /data']  # Or more restrictive
  volumeMounts:
  - name: data
    mountPath: /data

5. SecurityContext Misconfiguration

Common Mistakes I’ve Seen

Mistake 1: Wrong nesting level

# This doesn't work
containers:
- name: app
  image: myapp:latest
  fsGroup: 1000  # Not a container-level setting!

# Should be:
spec:
  securityContext:
    fsGroup: 1000

Mistake 2: Conflicting settings

spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - name: app
    securityContext:
      runAsUser: 0  # Conflicts! Can't be root with runAsNonRoot

Mistake 3: Missing runAsGroup

securityContext:
  runAsUser: 1000
  # Without runAsGroup, the GID comes from the image - often 0 (root)!
  # Should be:
  runAsGroup: 1000
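Why the group matters for volumes: without a setgid bit on the directory, a new file’s group is the creating process’s effective GID, so a defaulted GID shows up in everything the app writes. A quick local illustration (plain Linux shell, hypothetical temp directory):

```shell
#!/bin/sh
# A new file's group comes from the creating process's effective GID
DIR=$(mktemp -d)
chmod g-s "$DIR"              # make sure setgid inheritance doesn't interfere
touch "$DIR/file"
echo "process gid: $(id -g)"
echo "file group:  $(stat -c '%g' "$DIR/file")"
# These match; if the pod's GID had defaulted to 0, files would be group root
```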

The Right Configuration

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  securityContext:
    # Pod-level security
    fsGroup: 2000              # GID for volume ownership
    runAsNonRoot: true         # Don't allow root
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      # Container-level security
      runAsUser: 1000          # UID to run as
      runAsGroup: 2000         # GID to run as
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true  # Good security practice
      capabilities:
        drop:
        - ALL
    volumeMounts:
    - name: data
      mountPath: /data

Debug SecurityContext Issues

# Check what user container actually runs as
kubectl exec -it app-pod -- id

# Should show:
uid=1000 gid=2000 groups=2000

# Check pod security context
kubectl get pod app-pod -o yaml | grep -A 10 securityContext

# Check if pod is rejected by policy
kubectl get events | grep -i "pod security"

6. Root User Cannot Write to Volume

The Confusing Error

$ kubectl exec -it app-pod -- whoami
root

$ kubectl exec -it app-pod -- touch /data/file
touch: cannot touch '/data/file': Permission denied

# Wait, root can't write? What?

Even root gets permission denied. This broke my brain the first time I saw it.

Why This Happens

Reason 1: ReadOnlyRootFilesystem

securityContext:
  readOnlyRootFilesystem: true  # Container's root filesystem is read-only
  # Mounted volumes stay writable, but any path that is NOT a volume
  # mount (a bare /tmp, /var/cache, etc.) rejects writes - even for root

Reason 2: Volume mounted read-only

volumeMounts:
- name: data
  mountPath: /data
  readOnly: true  # Explicitly read-only

Reason 3: PVC has ReadOnly access mode

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadOnlyMany  # Read-only!

Reason 4: SELinux blocking (on OpenShift/RHEL)

# SELinux context mismatch
ls -laZ /data
drwxr-xr-x. root root system_u:object_r:container_file_t:s0 data

The Fix

Check readOnly settings:

# Check volumeMount
kubectl get pod app-pod -o yaml | grep -A 5 volumeMounts

# Check PVC access mode
kubectl get pvc data-pvc -o jsonpath='{.spec.accessModes}'

Fix volumeMount:

volumeMounts:
- name: data
  mountPath: /data
  readOnly: false  # Or just omit this line

Fix readOnlyRootFilesystem:

securityContext:
  readOnlyRootFilesystem: true  # Keep this
  
# Mount writable volumes explicitly
volumeMounts:
- name: data
  mountPath: /data  # This is writable
- name: tmp
  mountPath: /tmp   # Temporary storage
  
volumes:
- name: tmp
  emptyDir: {}      # Writable temp space

Fix PVC:

spec:
  accessModes:
    - ReadWriteOnce  # Change from ReadOnlyMany

7. SELinux Volume Labeling Issues (OpenShift)

The OpenShift/RHEL Special

If you’re on OpenShift or RHEL with SELinux enabled, you might see:

$ kubectl logs app-pod
Error: EACCES: permission denied, open '/data/config.json'

$ kubectl exec -it app-pod -- ls -laZ /data
drwxrwxr-x. root root system_u:object_r:container_file_t:s0 data
# That SELinux context might be wrong

Why SELinux Complicates Things

SELinux adds another layer of access control beyond standard Unix permissions. Even if UID/GID are correct, SELinux context must also match.

Check if SELinux is the Problem

# Check if SELinux is enabled (getenforce must exist in the image)
kubectl exec -it app-pod -- getenforce
# If it returns "Enforcing", SELinux is active

# SELinux denials are logged on the node, not inside the container.
# On the node (if you have access):
sudo ausearch -m avc -ts recent
# or
sudo grep denied /var/log/audit/audit.log

The Fix: Set SELinux Options

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"  # Set SELinux level
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-pvc

Nuclear option (debugging only):

securityContext:
  privileged: true  # Runs unconfined, bypassing SELinux restrictions
  # Only use for debugging!

Alternative: Set SELinux Label on PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  mountOptions:
    - context="system_u:object_r:container_file_t:s0"  # Set SELinux context
  nfs:
    server: nfs-server.example.com
    path: /exports/data

OpenShift Specific

OpenShift usually handles this automatically with SCCs (Security Context Constraints):

# Check SCC
oc describe scc restricted

# If issues persist, might need different SCC
oc adm policy add-scc-to-user anyuid -z default

Quick Troubleshooting Checklist

When you get permission denied:

# 1. Check what user the container runs as
kubectl exec -it <pod> -- id

# 2. Check volume ownership
kubectl exec -it <pod> -- ls -la /mount/path

# 3. Check if fsGroup is set
kubectl get pod <pod> -o jsonpath='{.spec.securityContext.fsGroup}'

# 4. Check access mode
kubectl get pvc <pvc> -o jsonpath='{.spec.accessModes}'

# 5. Check for read-only settings
kubectl get pod <pod> -o yaml | grep -i readonly

# 6. Check for SELinux issues (RHEL/OpenShift)
kubectl exec -it <pod> -- ls -laZ /mount/path

# 7. Try a test write (debugging only)
kubectl exec -it <pod> -- touch /mount/path/test

The Universal Fix (That Actually Works)

When nothing else works, this init container fixes 90% of permission issues:

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  initContainers:
  - name: volume-permissions
    image: busybox
    command:
    - sh
    - -c
    - |
      echo "Fixing volume permissions..."
      chown -R 1000:1000 /data
      chmod -R 775 /data
      echo "Done!"
    volumeMounts:
    - name: data
      mountPath: /data
    securityContext:
      runAsUser: 0  # Init container runs as root
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-pvc

This init container:

  • Runs as root (can change any permissions)
  • Fixes ownership to match your app’s UID/GID
  • Runs before your app starts
  • Only runs once per pod start

I use this pattern in almost every stateful application I deploy.

Best Practices I Follow Now

After dealing with permission issues for years:

1. Always Set Both UID and GID

securityContext:
  runAsUser: 1000
  runAsGroup: 1000  # Don't forget this!
  fsGroup: 1000

2. Match UIDs Between Dockerfile and SecurityContext

# In your Dockerfile
USER 1000:1000

# In your pod spec
securityContext:
  runAsUser: 1000
  runAsGroup: 1000

3. Use Init Containers for Permission Fixes

Don’t try to fix permissions in the main container. Use an init container that runs as root.

4. Test with a Simple Pod First

apiVersion: v1
kind: Pod
metadata:
  name: permission-test
spec:
  securityContext:
    fsGroup: 1000
  containers:
  - name: test
    image: busybox
    command: ['sh', '-c', 'while true; do sleep 30; done']
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc

5. Document Your UID/GID Choices

# Comment in your manifests
securityContext:
  runAsUser: 1000  # Matches 'appuser' in Dockerfile
  fsGroup: 1000    # Required for volume access

Common Patterns That Work

Pattern 1: Standard Web Application

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  template:
    spec:
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
      initContainers:
      - name: init-permissions
        image: busybox
        command: ['sh', '-c', 'chmod 775 /data && chown 1000:2000 /data']
        volumeMounts:
        - name: data
          mountPath: /data
        securityContext:
          runAsUser: 0
      containers:
      - name: web
        image: webapp:latest
        securityContext:
          runAsUser: 1000
          runAsGroup: 2000
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: data
          mountPath: /app/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: webapp-pvc

Pattern 2: Database with StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  template:
    spec:
      securityContext:
        fsGroup: 999  # postgres group
      containers:
      - name: postgres
        image: postgres:15
        securityContext:
          runAsUser: 999   # postgres user
          runAsGroup: 999
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

Pattern 3: Shared Storage for Multiple Pods

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-uploads
spec:
  accessModes:
    - ReadWriteMany  # Must use RWX-capable storage
  storageClassName: efs-sc
  resources:
    requests:
      storage: 100Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 5
  template:
    spec:
      securityContext:
        fsGroup: 1000
      containers:
      - name: app
        image: myapp:latest
        securityContext:
          runAsUser: 1000
          runAsGroup: 1000
        volumeMounts:
        - name: uploads
          mountPath: /app/uploads
      volumes:
      - name: uploads
        persistentVolumeClaim:
          claimName: shared-uploads

Final Thoughts

Volume permission errors in Kubernetes are deceptively complex. You’d think “mount a volume, write to it” would be simple, but:

  • Access modes limit where volumes can be used
  • Container users must match volume ownership
  • fsGroup only works with certain volume types
  • SELinux adds another layer on RHEL/OpenShift
  • Cloud providers have their own quirks

After years of fighting these issues, here’s what I’ve learned:

Always:

  1. Set runAsUser and runAsGroup in the container securityContext
  2. Set fsGroup in the pod securityContext
  3. Use init containers to fix ownership on existing volumes

Always check:

  1. What UID your container runs as (id command)
  2. What owns the volume (ls -la /path)
  3. Whether they match
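Those three checks reduce to a comparison you can script. A hypothetical helper – feed it the numbers you collected with `id` and `ls -n`:

```shell
#!/bin/sh
# Triage: compare the container's identity against the volume's ownership
check_access() {
    run_uid=$1; run_gid=$2; owner_uid=$3; owner_gid=$4
    if [ "$run_uid" -eq "$owner_uid" ]; then
        echo "ok: UID matches owner"
    elif [ "$run_gid" -eq "$owner_gid" ]; then
        echo "maybe: GID matches group (directory still needs g+w)"
    else
        echo "mismatch: set fsGroup or fix ownership"
    fi
}

check_access 1000 1000 0 0    # the failure from this article
# prints "mismatch: set fsGroup or fix ownership"
```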

When in doubt:

  • Start with RWO unless you truly need shared storage
  • Use StatefulSet for per-pod storage
  • Test with a simple busybox pod first

The 11 PM production incident taught me to test permissions in dev. Now I always create a test pod with the same securityContext before deploying. It saves hours of debugging and prevents those embarrassing midnight Slack notifications.

Hit a volume permission error I didn’t cover? Drop it in the comments – I’m always learning new ways storage can break.
