Last Updated: January 2026
It was 11 PM on a Thursday when I got the Slack notification: production pods were failing. The error message? Simple and infuriating:

```
sh: can't create /data/files.txt: Permission denied
```
The pod was running. The volume was mounted. Everything looked right. But the application couldn’t write to its own data directory.
I spent three hours that night learning about the Kubernetes volume "permission denied" error the hard way. Turns out, UID 1000 in my container didn't match the volume's ownership (UID 0). Classic beginner mistake that somehow made it to production.
If you're getting "permission denied" errors on your volumes, or your pods mysteriously can't read their own storage, you're not alone. Volume permissions in Kubernetes are surprisingly tricky – especially when security contexts, fsGroup settings, and access modes all interact.
Let me save you those three hours (and the embarrassment of a midnight production issue).
In this article, I explain the most common Kubernetes volume permission errors and their fixes, drawn from real-world production scenarios.
The Basics: Why Kubernetes Volume Permission Denied Errors Are Confusing
Before we fix anything, let's understand what's actually happening when you hit a volume "permission denied" error.
When you mount a volume in Kubernetes:
- Volume exists with certain ownership (UID/GID)
- Container runs as a specific user
- Linux checks: “Does this user have permission to access this path?”
If any of these don’t align, you get “kubernetes volume permission denied.”
Three things control access:
- Volume ownership – Who owns the files (UID/GID)
- Container user – Who the app runs as
- SecurityContext – Kubernetes settings that modify permissions
Seems simple. But wait until you add access modes, fsGroup, SELinux, and cloud provider quirks.
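To make that check concrete, here is a small shell sketch – my own illustration, not anything Kubernetes ships – of the owner/group/other decision Linux makes on each write. Real kernels also consult capabilities, ACLs, and SELinux, so treat this as the simplified core of the rule:

```shell
#!/bin/sh
# Minimal sketch of the owner/group/other check Linux performs.
# Usage: can_write <uid> <gid> <owner-uid> <owner-gid> <mode>
can_write() {
  uid=$1; gid=$2; owner=$3; group=$4; mode=$5
  [ "$uid" -eq 0 ] && { echo yes; return; }   # root bypasses mode bits
  if [ "$uid" -eq "$owner" ]; then
    bits=$(echo "$mode" | cut -c1)            # owner digit
  elif [ "$gid" -eq "$group" ]; then
    bits=$(echo "$mode" | cut -c2)            # group digit
  else
    bits=$(echo "$mode" | cut -c3)            # "other" digit
  fi
  [ $(( bits & 2 )) -ne 0 ] && echo yes || echo no  # 2 = write bit
}

# Container UID 1000 against a root-owned volume (root:root, mode 755): denied.
can_write 1000 1000 0 0 755     # prints: no
# After fsGroup relabels the volume to group 1000 with g+w set: allowed.
can_write 1000 1000 0 1000 775  # prints: yes
```

Every error in this article is ultimately one of those three inputs – UID, GID, or mode – being something other than what you expected.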
1. ReadWriteOnce Volume Mounted on Multiple Nodes
The Error
```
$ kubectl describe pod app-xyz
Events:
  Warning  FailedAttachVolume  Multi-Attach error for volume "pvc-abc123"
  Volume is already exclusively attached to one node and can't be attached to another
```
Your deployment has 3 replicas, but only one pod starts. The others hang forever.
What ReadWriteOnce Actually Means
ReadWriteOnce (RWO) = One node at a time can mount this volume.
Not “one pod” – one node. If two pods are on the same node, they can both use an RWO volume. If they’re on different nodes? Game over.
When This Happens to You
```yaml
# This won't work with RWO volumes!
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3  # Pods will spread across nodes
  template:
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc  # RWO volume!
```
I learned this the hard way when scaling our app from 1 to 5 replicas. First pod started fine. The other four? Stuck.
How to Fix It
Option 1: Use StatefulSet (if each pod needs its own data)
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: webapp
spec:
  replicas: 3
  volumeClaimTemplates:  # Each pod gets its own PVC/volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```
Option 2: Use ReadWriteMany storage (if pods share data)
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany  # Multiple nodes can mount
  storageClassName: efs-sc  # Must support RWX!
  resources:
    requests:
      storage: 10Gi
```
Option 3: Pin pods to same node (not recommended)
```yaml
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: webapp
        topologyKey: kubernetes.io/hostname
```
Check What You Have
```shell
# Check PVC access mode
kubectl get pvc my-pvc -o jsonpath='{.spec.accessModes}'

# Check how many nodes pods are on
kubectl get pods -o wide | grep app
```
2. ReadWriteMany Not Supported
The Error
```
$ kubectl describe pvc shared-storage
Events:
  Warning  ProvisioningFailed  failed to provision volume:
  requested access mode ReadWriteMany is not supported
```
Your PVC sits in Pending forever because your storage doesn’t support RWX.
What Most People Don’t Know
Not all storage types support ReadWriteMany:
Don’t support RWX:
- AWS EBS
- GCP Persistent Disk
- Azure Disk
- Most block storage
Do support RWX:
- AWS EFS
- GCP Filestore
- Azure Files
- NFS servers
- CephFS
- GlusterFS
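As a quick reference, the two lists above can be folded into a shell lookup keyed on the provisioner name. The provisioner strings below are the common CSI defaults for each cloud – verify the exact string in your own cluster with `kubectl get sc`:

```shell
#!/bin/sh
# Hedged lookup sketch: map a StorageClass provisioner to RWX support.
# Provisioner names are the usual CSI defaults; confirm against your cluster.
supports_rwx() {
  case "$1" in
    efs.csi.aws.com|file.csi.azure.com|filestore.csi.storage.gke.io|*nfs*|*cephfs*)
      echo yes ;;                                    # file-based storage
    ebs.csi.aws.com|pd.csi.storage.gke.io|disk.csi.azure.com|kubernetes.io/aws-ebs)
      echo no ;;                                     # block storage
    *)
      echo unknown ;;                                # check the driver docs
  esac
}

supports_rwx ebs.csi.aws.com   # prints: no
supports_rwx efs.csi.aws.com   # prints: yes
```

Pair it with `kubectl get sc -o jsonpath='{.items[*].provisioner}'` to audit a cluster in one pass.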
Real-World Scenario
I once tried to scale a PHP application with session storage on an EBS-backed PVC. Set access mode to ReadWriteMany. PVC stayed Pending for 30 minutes before I realized EBS doesn’t support it.
The Fix
Check what your StorageClass supports:
```shell
# Check StorageClass details
kubectl get sc standard -o yaml

# Look at the provisioner in the output:
#   provisioner: kubernetes.io/aws-ebs   # EBS = no RWX support
```
Option 1: Switch to RWX-capable storage
```yaml
# For AWS - use EFS instead of EBS
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-storage
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: efs-sc  # EFS supports RWX
  resources:
    requests:
      storage: 20Gi
```
Option 2: Use ReadWriteOnce with StatefulSet
If you don’t actually need shared storage:
```yaml
# Each pod gets its own volume
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]  # Works with EBS
```
Option 3: Use external shared storage
For things like uploads or session storage:
```shell
# Use S3, Redis, or external NFS instead of volumes
```
Quick Check
```shell
# See all StorageClasses and their capabilities
kubectl get sc -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner

# Test if RWX works
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-test
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
EOF

# Check if it binds
kubectl get pvc rwx-test
# If Pending = RWX not supported
```
3. Permission Denied Inside Container
The Error
```
$ kubectl logs app-pod
Error: EACCES: permission denied, open '/data/file.txt'
```
or
```
$ kubectl exec -it app-pod -- touch /data/test
touch: cannot touch '/data/test': Permission denied
```
The classic. Volume mounted successfully, but the app can’t write to it.
Why This Happens
Your container runs as UID 1000, but the volume is owned by UID 0 (root). Linux says “nope.”
Debug the Problem
```shell
# Check what user your container runs as
kubectl exec -it app-pod -- id
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)

# Check volume ownership
kubectl exec -it app-pod -- ls -la /data
drwxr-xr-x  2 root root 4096 Jan 10 10:00 /data
# ↑ Owned by root (UID 0), your app is UID 1000 - problem!
```
The Fix: Use fsGroup
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  securityContext:
    fsGroup: 1000  # Make volume accessible to GID 1000
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-pvc
```
After setting fsGroup: 1000:
```
$ kubectl exec -it app-pod -- ls -la /data
drwxrwsr-x  2 root 1000 4096 Jan 10 10:00 /data
# ↑ GID is now 1000, setgid bit set
```
Alternative: Run as Root (Not Recommended)
```yaml
securityContext:
  runAsUser: 0  # Run as root - works but security risk
```
Alternative: Change Volume Ownership with Init Container
```yaml
initContainers:
- name: fix-permissions
  image: busybox
  command: ['sh', '-c', 'chown -R 1000:1000 /data']
  volumeMounts:
  - name: data
    mountPath: /data
  securityContext:
    runAsUser: 0  # Init container runs as root to fix permissions
```
4. fsGroup Not Applied
The Problem
You set fsGroup: 1000 but it doesn’t work. Volume still owned by root.
```yaml
securityContext:
  fsGroup: 1000  # You set this
```

```
# But inside the pod:
$ ls -la /data
drwxr-xr-x 2 root root 4096 Jan 10 10:00 /data  # Still root!
```
Why fsGroup Doesn’t Always Work
1. Volume type doesn’t support fsGroup
Some volume types ignore fsGroup:
- HostPath volumes
- Local volumes
- Some CSI drivers
2. fsGroup set at wrong level
```yaml
# Wrong - at container level
containers:
- name: app
  securityContext:
    fsGroup: 1000  # This doesn't work here!

# Right - at pod level
spec:
  securityContext:
    fsGroup: 1000  # Must be here!
```
3. Existing files already have wrong ownership
fsGroup only affects new files. Existing files keep their ownership.
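You can watch the underlying mechanism on any Linux box, no cluster required. This is a local sketch of what fsGroup does to the volume: it chgrps the mount and sets the setgid bit (the leading 2 in mode 2775), which guarantees group inheritance only for files created afterwards:

```shell
#!/bin/sh
# Local illustration (plain Linux, no Kubernetes) of the setgid mechanism
# fsGroup relies on. Files created BEFORE the mode change keep their old
# ownership -- which is exactly why fsGroup appears "not applied" to old data.
dir=$(mktemp -d)

touch "$dir/old-file"   # created before the permissions change
chmod 2775 "$dir"       # same drwxrwsr-x you see after fsGroup kicks in
touch "$dir/new-file"   # created after -- inherits the directory's group

stat -c '%a' "$dir"     # prints: 2775 (the 2 is the setgid bit)
ls -l "$dir"            # compare the group column of old-file vs new-file

rm -rf "$dir"
```

In a pod, the "chmod 2775 + chgrp" step is done by the kubelet at mount time, so anything written before the volume ever had the right group needs an explicit `chown` (see the init-container fix below this section).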
How I Debug This
```shell
# Check if fsGroup is actually set
kubectl get pod app-pod -o jsonpath='{.spec.securityContext.fsGroup}'
# Should return: 1000

# Check if CSI driver supports fsGroup
kubectl get csidriver ebs.csi.aws.com -o yaml | grep -i fsgroup

# Check actual volume permissions
kubectl exec -it app-pod -- ls -la /data
```
The Fix
For new volumes:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  securityContext:
    fsGroup: 1000  # At pod level!
    fsGroupChangePolicy: "OnRootMismatch"  # Skip the recursive change if the root dir already matches
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
```
For existing volumes with wrong permissions:
```yaml
initContainers:
- name: fix-ownership
  image: busybox
  command:
  - sh
  - -c
  - |
    chown -R 1000:1000 /data
    chmod -R g+rw /data
  volumeMounts:
  - name: data
    mountPath: /data
  securityContext:
    runAsUser: 0  # Needs root to chown
```
For volume types that don’t support fsGroup:
```yaml
# Use init container (only option)
initContainers:
- name: permission-fix
  image: busybox
  command: ['sh', '-c', 'chmod 777 /data']  # Or more restrictive
  volumeMounts:
  - name: data
    mountPath: /data
```
5. SecurityContext Misconfiguration
Common Mistakes I’ve Seen
Mistake 1: Wrong nesting level
```yaml
# This doesn't work
containers:
- name: app
  image: myapp:latest
  fsGroup: 1000  # Not a container-level setting!

# Should be:
spec:
  securityContext:
    fsGroup: 1000
```
Mistake 2: Conflicting settings
```yaml
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - name: app
    securityContext:
      runAsUser: 0  # Conflicts! Can't be root with runAsNonRoot
```
Mistake 3: Missing runAsGroup
```yaml
securityContext:
  runAsUser: 1000
  # Missing runAsGroup - defaults to 0 (root)!

# Should be:
securityContext:
  runAsUser: 1000
  runAsGroup: 1000
```
The Right Configuration
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  securityContext:
    # Pod-level security
    fsGroup: 2000       # GID for volume ownership
    runAsNonRoot: true  # Don't allow root
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      # Container-level security
      runAsUser: 1000   # UID to run as
      runAsGroup: 2000  # GID to run as
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true  # Good security practice
      capabilities:
        drop:
        - ALL
    volumeMounts:
    - name: data
      mountPath: /data
```
Debug SecurityContext Issues
```shell
# Check what user container actually runs as
kubectl exec -it app-pod -- id
# Should show: uid=1000 gid=2000 groups=2000

# Check pod security context
kubectl get pod app-pod -o yaml | grep -A 10 securityContext

# Check if pod is rejected by policy
kubectl get events | grep -i "pod security"
```
6. Root User Cannot Write to Volume
The Confusing Error
```
$ kubectl exec -it app-pod -- whoami
root
$ kubectl exec -it app-pod -- touch /data/file
touch: cannot touch '/data/file': Permission denied
# Wait, root can't write? What?
```
Even root gets "permission denied" on the volume. This broke my brain the first time I saw it.
Why This Happens
Reason 1: ReadOnlyRootFilesystem
```yaml
securityContext:
  readOnlyRootFilesystem: true  # Container's root filesystem is read-only
# Any path not backed by a writable volume mount will reject writes
```
Reason 2: Volume mounted read-only
```yaml
volumeMounts:
- name: data
  mountPath: /data
  readOnly: true  # Explicitly read-only
```
Reason 3: PVC has ReadOnly access mode
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadOnlyMany  # Read-only!
```
Reason 4: SELinux blocking (on OpenShift/RHEL)
```shell
# SELinux context mismatch
ls -laZ /data
drwxr-xr-x. root root system_u:object_r:container_file_t:s0 data
```
The Fix
Check readOnly settings:
```shell
# Check volumeMount
kubectl get pod app-pod -o yaml | grep -A 5 volumeMounts

# Check PVC access mode
kubectl get pvc data-pvc -o jsonpath='{.spec.accessModes}'
```
Fix volumeMount:
```yaml
volumeMounts:
- name: data
  mountPath: /data
  readOnly: false  # Or just omit this line
```
Fix readOnlyRootFilesystem:
```yaml
securityContext:
  readOnlyRootFilesystem: true  # Keep this
# Mount writable volumes explicitly
volumeMounts:
- name: data
  mountPath: /data  # This is writable
- name: tmp
  mountPath: /tmp   # Temporary storage
volumes:
- name: tmp
  emptyDir: {}      # Writable temp space
```
Fix PVC:
```yaml
spec:
  accessModes:
  - ReadWriteOnce  # Change from ReadOnlyMany
```
7. SELinux Volume Labeling Issues (OpenShift)
The OpenShift/RHEL Special
If you’re on OpenShift or RHEL with SELinux enabled, you might see:
```
$ kubectl logs app-pod
Error: EACCES: permission denied, open '/data/config.json'

$ kubectl exec -it app-pod -- ls -laZ /data
drwxrwxr-x. root root system_u:object_r:container_file_t:s0 data
# That SELinux context might be wrong
```
Why SELinux Complicates Things
SELinux adds another layer of access control beyond standard Unix permissions. Even if UID/GID are correct, SELinux context must also match.
Check if SELinux is the Problem
```shell
# Check if SELinux is enabled
kubectl exec -it app-pod -- getenforce
# If it returns "Enforcing", SELinux is active

# Check SELinux denials
kubectl exec -it app-pod -- cat /var/log/audit/audit.log | grep denied

# On the node (if you have access)
sudo ausearch -m avc -ts recent
```
The Fix: Set SELinux Options
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"  # Set SELinux level
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-pvc
```
Alternative: Use Privileged Container (Not Recommended)
```yaml
securityContext:
  privileged: true  # Disables SELinux restrictions
  # Only use for debugging!
```
Alternative: Set SELinux Label on PV
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  mountOptions:
  - context="system_u:object_r:container_file_t:s0"  # Set SELinux context
  nfs:
    server: nfs-server.example.com
    path: /exports/data
```
OpenShift Specific
OpenShift usually handles this automatically with SCCs (Security Context Constraints):
```shell
# Check SCC
oc describe scc restricted

# If issues persist, you might need a different SCC
oc adm policy add-scc-to-user anyuid -z default
```
Quick Troubleshooting Checklist
When you get permission denied:
```shell
# 1. Check what user the container runs as
kubectl exec -it <pod> -- id

# 2. Check volume ownership
kubectl exec -it <pod> -- ls -la /mount/path

# 3. Check if fsGroup is set
kubectl get pod <pod> -o jsonpath='{.spec.securityContext.fsGroup}'

# 4. Check access mode
kubectl get pvc <pvc> -o jsonpath='{.spec.accessModes}'

# 5. Check for read-only settings
kubectl get pod <pod> -o yaml | grep -i readonly

# 6. Check for SELinux issues (RHEL/OpenShift)
kubectl exec -it <pod> -- ls -laZ /mount/path

# 7. Try as root (debugging only)
kubectl exec -it <pod> -- su - root -c 'touch /mount/path/test'
```
The Universal Fix (That Actually Works)
When nothing else works, this init container fixes 90% of permission issues:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  initContainers:
  - name: volume-permissions
    image: busybox
    command:
    - sh
    - -c
    - |
      echo "Fixing volume permissions..."
      chown -R 1000:1000 /data
      chmod -R 775 /data
      echo "Done!"
    volumeMounts:
    - name: data
      mountPath: /data
    securityContext:
      runAsUser: 0  # Init container runs as root
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-pvc
```
This init container:
- Runs as root (can change any permissions)
- Fixes ownership to match your app’s UID/GID
- Runs before your app starts
- Only runs once per pod start
I use this pattern in almost every stateful application I deploy.
Best Practices I Follow Now
After dealing with permission issues for years:
1. Always Set Both UID and GID
```yaml
securityContext:
  runAsUser: 1000
  runAsGroup: 1000  # Don't forget this!
  fsGroup: 1000
```
2. Match UIDs Between Dockerfile and SecurityContext
```dockerfile
# In Dockerfile
USER 1000:1000
```

```yaml
# In pod spec
securityContext:
  runAsUser: 1000
  runAsGroup: 1000
```
3. Use Init Containers for Permission Fixes
Don’t try to fix permissions in the main container. Use an init container that runs as root.
4. Test with a Simple Pod First
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: permission-test
spec:
  securityContext:
    fsGroup: 1000
  containers:
  - name: test
    image: busybox
    command: ['sh', '-c', 'while true; do sleep 30; done']
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
```
5. Document Your UID/GID Choices
```yaml
# Comment in your manifests
securityContext:
  runAsUser: 1000  # Matches 'appuser' in Dockerfile
  fsGroup: 1000    # Required for volume access
```
Common Patterns That Work
Pattern 1: Standard Web Application
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  template:
    spec:
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
      initContainers:
      - name: init-permissions
        image: busybox
        command: ['sh', '-c', 'chmod 775 /data && chown 1000:2000 /data']
        volumeMounts:
        - name: data
          mountPath: /data
        securityContext:
          runAsUser: 0
      containers:
      - name: web
        image: webapp:latest
        securityContext:
          runAsUser: 1000
          runAsGroup: 2000
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: data
          mountPath: /app/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: webapp-pvc
```
Pattern 2: Database with StatefulSet
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  template:
    spec:
      securityContext:
        fsGroup: 999  # postgres group
      containers:
      - name: postgres
        image: postgres:15
        securityContext:
          runAsUser: 999  # postgres user
          runAsGroup: 999
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```
Pattern 3: Shared Storage for Multiple Pods
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-uploads
spec:
  accessModes:
  - ReadWriteMany  # Must use RWX-capable storage
  storageClassName: efs-sc
  resources:
    requests:
      storage: 100Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 5
  template:
    spec:
      securityContext:
        fsGroup: 1000
      containers:
      - name: app
        image: myapp:latest
        securityContext:
          runAsUser: 1000
          runAsGroup: 1000
        volumeMounts:
        - name: uploads
          mountPath: /app/uploads
      volumes:
      - name: uploads
        persistentVolumeClaim:
          claimName: shared-uploads
```
Final Thoughts
Volume permission errors in Kubernetes are deceptively complex. You'd think "mount a volume, write to it" would be simple, but:
- Access modes limit where volumes can be used
- Container users must match volume ownership
- fsGroup only works with certain volume types
- SELinux adds another layer on RHEL/OpenShift
- Cloud providers have their own quirks
After years of fighting these issues, here’s what I’ve learned:
Always set:
- `runAsUser` and `runAsGroup` in the container securityContext
- `fsGroup` in the pod securityContext
- Use init containers to fix existing permissions
Always check:
- What UID your container runs as (`id` command)
- What owns the volume (`ls -la /path`)
- Whether they match
When in doubt:
- Start with RWO unless you truly need shared storage
- Use StatefulSet for per-pod storage
- Test with a simple busybox pod first
The 11 PM production incident taught me to test permissions in dev first. Now I always create a test pod with the same securityContext before deploying. It saves hours of debugging and prevents those embarrassing midnight Slack notifications.
Hit a volume permission error I didn't cover? Drop it in the comments – I'm always learning new ways storage can break.