Last Updated: January 2026
I remember the day I spent four hours debugging why a simple PVC wouldn’t bind. The YAML looked perfect. The cluster was healthy. Everything should have worked.
The error? `storageclass.storage.k8s.io "standard" not found`
Turns out, I was in a fresh cluster and nobody had created a StorageClass yet. The PVC was asking for “standard” but there was no StorageClass with that name. Zero. Nada. Nothing.
Once I created the StorageClass, everything worked instantly. Four hours for a 30-second fix.
Kubernetes StorageClass errors are sneaky because they make everything else look broken. Your PVCs sit in Pending. Your pods won’t start. Your deployment looks fine but nothing actually runs. And the error messages? They’re not always helpful.
If your persistent volumes aren’t provisioning, chances are it’s a StorageClass issue. Let me show you the seven most common StorageClass errors and how to fix them fast.
Understanding StorageClasses (The 2-Minute Version)
Before we fix errors, here’s what StorageClasses actually do:
StorageClass = Template for creating volumes
When you create a PVC, it asks for a StorageClass. The StorageClass tells Kubernetes:
- Which provisioner to use (EBS, GCE PD, NFS, etc.)
- What parameters to use (volume type, IOPS, encryption)
- When to create the volume (immediately or wait for pod)
Think of it like ordering pizza:
- PVC = “I want a large pizza”
- StorageClass = “Large means 14 inches, thin crust, from Domino’s”
- Provisioner = “Domino’s” (does the actual work)
When any part of this chain breaks, no pizza (volume) for you.
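To make the chain concrete, here's a minimal sketch of the two pieces side by side — assuming the AWS EBS CSI driver; swap the provisioner for your platform:

```yaml
# StorageClass = "large means 14 inches, thin crust" - the template
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com  # the "Domino's" that does the actual work
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
---
# PVC = "I want a large pizza" - references the template by name
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard  # must match the StorageClass name exactly
  resources:
    requests:
      storage: 10Gi
```

If `storageClassName` doesn't match an existing StorageClass, the chain breaks at the very first link — which is exactly error #1 below.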
Kubernetes Storage Errors
1. StorageClass Not Found
The Error
```shell
$ kubectl get pvc
NAME     STATUS    VOLUME   CAPACITY   STORAGECLASS   AGE
my-pvc   Pending                       standard       5m

$ kubectl describe pvc my-pvc
Events:
  Warning  ProvisioningFailed  storageclass.storage.k8s.io "standard" not found
```
This is the error that cost me four hours. Your PVC references a StorageClass that simply doesn’t exist.
Why This Happens
Scenario 1: Fresh cluster, no StorageClasses created yet
```shell
$ kubectl get storageclass
No resources found
# Yep, that'll do it
```
Scenario 2: Typo in PVC
```yaml
# PVC asks for "standrad"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standrad  # Typo! Should be "standard"
  resources:
    requests:
      storage: 10Gi
```
Scenario 3: Copied YAML from different cluster
You copied a PVC from production that uses “gp3-encrypted” but your dev cluster only has “gp2”.
How to Fix It
Step 1: Check what StorageClasses actually exist
```shell
kubectl get storageclass
# or shorter:
kubectl get sc
```
Step 2: Create the missing StorageClass
For AWS EBS:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
For GCP Persistent Disk:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
For Azure Disk:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: disk.csi.azure.com
parameters:
  skuName: Standard_LRS
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
Step 3: Apply it
```shell
kubectl apply -f storageclass.yaml

# Verify it exists
kubectl get sc standard
```
Step 4: Your existing PVC should now provision
```shell
kubectl get pvc my-pvc
# With WaitForFirstConsumer, it moves from Pending to Bound
# as soon as a pod uses the PVC
```
Pro Tip
If you don’t want to specify storageClassName in every PVC, set a default (see next section).
2. Default StorageClass Missing
The Problem
```yaml
# PVC with no storageClassName specified
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  # No storageClassName specified!
  resources:
    requests:
      storage: 5Gi
```

```shell
$ kubectl get pvc my-pvc
NAME     STATUS    VOLUME   STORAGECLASS   AGE
my-pvc   Pending            <none>         10m
```
When you don’t specify a StorageClass, Kubernetes looks for one marked as default. If there isn’t one, the PVC stays Pending forever.
Check for Default StorageClass
```shell
$ kubectl get storageclass
NAME   PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE
gp2    ebs.csi.aws.com   Delete          Immediate
gp3    ebs.csi.aws.com   Delete          WaitForFirstConsumer

# No "(default)" annotation = no default
```
The Fix: Set a Default
```shell
# Mark a StorageClass as default
kubectl patch storageclass gp3 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Verify it worked
$ kubectl get sc
NAME            PROVISIONER       RECLAIMPOLICY
gp2             ebs.csi.aws.com   Delete
gp3 (default)   ebs.csi.aws.com   Delete
# ↑ This means it's default
```
Or create a new default StorageClass:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # This line!
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
Important Notes
- You should have exactly ONE default StorageClass
- If you have zero, PVCs without storageClassName stay Pending
- If you have multiple, weird things happen (see next section)
My Standard Practice
I always create a default StorageClass in every cluster:
```shell
# In my cluster setup scripts
kubectl apply -f storageclass-default.yaml

# Verify default exists
kubectl get sc | grep default
```
This way, developers don’t need to remember to specify storageClassName in every PVC.
3. Multiple Default StorageClasses
The Chaos
```shell
$ kubectl get storageclass
NAME                 PROVISIONER       RECLAIMPOLICY
gp2 (default)        ebs.csi.aws.com   Delete
gp3 (default)        ebs.csi.aws.com   Delete
standard (default)   ebs.csi.aws.com   Delete
```
Three defaults! Which one will Kubernetes use?
Answer: Whichever one it feels like. The behavior is undefined.
This happened to me after importing StorageClass definitions from multiple sources. Each one was marked as default, and I didn’t notice until PVCs started binding to random StorageClasses.
Why This Is Bad
```yaml
# Developer creates PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  # No storageClassName - expects default
```
Sometimes the PVC lands on gp2, sometimes on gp3 — volumes with different performance and cost characteristics. Your database performance is now random. Not good.
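While you clean up the duplicate defaults, a defensive habit is to have critical workloads pin their class explicitly so they ignore whatever "default" happens to win. A sketch:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3  # pinned explicitly - immune to default roulette
  resources:
    requests:
      storage: 100Gi
```

This doesn't fix the root cause, but it makes important workloads deterministic regardless of cluster state.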
How to Fix It
Find all defaults:
```shell
kubectl get storageclass -o json | jq -r '.items[] | select(.metadata.annotations["storageclass.kubernetes.io/is-default-class"]=="true") | .metadata.name'
```
Remove default annotation from unwanted ones:
```shell
# Keep gp3 as default, remove from others
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# Verify only one default remains
$ kubectl get sc
NAME            PROVISIONER       RECLAIMPOLICY
gp2             ebs.csi.aws.com   Delete
gp3 (default)   ebs.csi.aws.com   Delete
standard        ebs.csi.aws.com   Delete
```
Prevention
In your cluster setup documentation:
```markdown
## Default StorageClass Policy
- Only ONE default StorageClass allowed
- Default should be: gp3 (balanced performance/cost)
- Check before adding new StorageClasses: kubectl get sc
```
I add this to every cluster’s README after learning this lesson the hard way.
4. Invalid Provisioner
The Error
```shell
$ kubectl describe pvc my-pvc
Events:
  Warning  ProvisioningFailed  failed to provision volume:
           failed to find provisioner "kubernetes.io/aws-ebs"
```
Your StorageClass references a provisioner that doesn’t exist or isn’t running.
Common Invalid Provisioners
Old in-tree provisioner names (deprecated):
```yaml
# These don't work anymore in modern Kubernetes
provisioner: kubernetes.io/aws-ebs     # Old!
provisioner: kubernetes.io/gce-pd      # Old!
provisioner: kubernetes.io/azure-disk  # Old!

# Should be CSI drivers now:
provisioner: ebs.csi.aws.com        # New
provisioner: pd.csi.storage.gke.io  # New
provisioner: disk.csi.azure.com     # New
```
Typos:
```yaml
provisioner: ebs.csi.aws.comm  # Extra 'm'
provisioner: ebs-csi-aws-com   # Wrong separators
```
Provisioner not installed:
```yaml
provisioner: nfs.csi.k8s.io
# But the NFS CSI driver isn't installed in your cluster
```
How to Fix It
Step 1: Check what provisioners are available
```shell
# See installed CSI drivers
kubectl get csidrivers
NAME              ATTACHREQUIRED   PODINFOONMOUNT
ebs.csi.aws.com   true             false
# This is your valid provisioner name
```
Step 2: Recreate the StorageClass with the correct provisioner

The `provisioner` field is immutable once a StorageClass is created, so you can't simply edit it in place — delete the class and apply a corrected version:

```shell
kubectl delete storageclass my-sc

# In your YAML, change provisioner to match the CSI driver name exactly
# From: provisioner: kubernetes.io/aws-ebs
# To:   provisioner: ebs.csi.aws.com
kubectl apply -f storageclass.yaml
```
Step 3: If CSI driver doesn’t exist, install it
For AWS EBS:
```shell
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.25"
```
For GCP PD:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/master/deploy/kubernetes/deploy-driver.yaml
```
For NFS:
```shell
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system
```
My Cheat Sheet
I keep this in every cluster’s docs:
```markdown
## Valid Provisioners in This Cluster
AWS EBS: ebs.csi.aws.com
AWS EFS: efs.csi.aws.com
NFS:     nfs.csi.k8s.io

Check: kubectl get csidrivers
```
5. Incorrect Parameters in StorageClass
The Error
```shell
$ kubectl describe pvc my-pvc
Events:
  Warning  ProvisioningFailed  failed to provision volume:
           rpc error: code = InvalidArgument desc = Invalid parameter "type": "gp7"
```
Your StorageClass has invalid parameters. The provisioner doesn’t understand what you’re asking for.
Common Parameter Mistakes
Invalid volume type:
```yaml
parameters:
  type: gp7  # Doesn't exist! Valid: gp2, gp3, io1, io2, st1, sc1
```
IOPS too high for volume type:
```yaml
parameters:
  type: gp3
  iops: "20000"  # gp3 max is 16,000!
```
Missing required parameters:
```yaml
# io2 volumes REQUIRE iops parameter
parameters:
  type: io2
  # Missing: iops: "10000"
```
Wrong parameter names:
```yaml
parameters:
  volumeType: gp3     # Wrong! Should be "type"
  encryption: "true"  # Wrong! Should be "encrypted"
```
Cloud Provider Specific Issues
AWS EBS:
```yaml
# Common mistakes
parameters:
  type: gp3
  iops: "3000"         # Must be string, not number
  throughput: "125"    # Must be string
  encrypted: "true"    # Must be string "true", not boolean
  kmsKeyId: "arn:..."  # Full ARN required
```
GCP Persistent Disk:
```yaml
parameters:
  type: pd-standard       # Valid: pd-standard, pd-balanced, pd-ssd
  replication-type: none  # Valid: none, regional-pd
```
Azure Disk:
```yaml
parameters:
  skuName: Standard_LRS  # Valid: Standard_LRS, Premium_LRS, StandardSSD_LRS
  location: eastus       # Must match cluster region
```
How to Fix It
Step 1: Check CSI driver logs for exact error
```shell
kubectl logs -n kube-system -l app=ebs-csi-controller -c csi-provisioner --tail=50
```
Step 2: Look up valid parameters
Check the CSI driver documentation:
- AWS EBS CSI Parameters
- GCP PD CSI Parameters
- Azure Disk CSI Parameters
Step 3: Fix the StorageClass

StorageClass `parameters` are immutable after creation, so fix the YAML and recreate the class:

```shell
kubectl delete storageclass my-sc
# Correct the parameters based on documentation, then:
kubectl apply -f storageclass.yaml
```
Step 4: Test with a new PVC
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-sc  # Your fixed StorageClass
  resources:
    requests:
      storage: 1Gi  # Small for testing
```
Working Examples
AWS gp3 with encryption:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-encrypted
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
AWS io2 high-performance:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: io2-fast
provisioner: ebs.csi.aws.com
parameters:
  type: io2
  iops: "10000"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
GCP regional persistent disk:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcp-regional
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
6. VolumeBindingMode Misconfigured
The Problem
There are two VolumeBindingModes:
- Immediate – Create volume as soon as PVC is created
- WaitForFirstConsumer – Wait until a pod uses the PVC
Using the wrong one causes problems.
Issue 1: Zone Mismatch with Immediate Mode
This bit me hard in production.
```yaml
# StorageClass with Immediate binding
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: ebs.csi.aws.com
volumeBindingMode: Immediate  # Creates volume immediately
parameters:
  type: io2
  iops: "10000"
```
What happens:
- You create a PVC
- Volume is created in random AZ (say, us-east-1a)
- Your pod gets scheduled in us-east-1b
- Volume can’t attach (wrong zone!)
- Pod stuck in ContainerCreating
```shell
$ kubectl describe pod my-pod
Events:
  Warning  FailedAttachVolume  AttachVolume.Attach failed:
           volume is in us-east-1a but pod is scheduled in us-east-1b
```
The Fix: Use WaitForFirstConsumer
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer  # Wait for pod!
parameters:
  type: io2
  iops: "10000"
```
With WaitForFirstConsumer:
- You create a PVC (stays Pending)
- You create a pod that uses the PVC
- Pod gets scheduled to a node (say, us-east-1a)
- Volume is created in the SAME zone (us-east-1a)
- Everything works!
Issue 2: Testing PVCs with WaitForFirstConsumer
```shell
# You create a PVC to test
kubectl apply -f pvc.yaml

$ kubectl get pvc
NAME       STATUS    VOLUME   CAPACITY   STORAGECLASS
test-pvc   Pending                       gp3
# Stays Pending forever - is it broken?
```
It’s not broken! It’s waiting for a pod. This confused me the first time I used WaitForFirstConsumer.
Test it properly:
```yaml
# Create a test pod
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test
      image: busybox
      command: ['sleep', '3600']
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc  # References your PVC
```
Now the PVC will bind:
```shell
$ kubectl get pvc
NAME       STATUS   VOLUME              CAPACITY   STORAGECLASS
test-pvc   Bound    pvc-abc123-def456   10Gi       gp3
```
When to Use Which Mode
Use WaitForFirstConsumer (recommended for cloud):
- Multi-zone clusters
- Cloud storage (EBS, GCE PD, Azure Disk)
- When you want volume in same zone as pod
Use Immediate:
- Single-zone clusters
- Testing/development
- When you need to pre-provision volumes
- NFS or storage that’s zone-agnostic
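For the zone-agnostic case, an Immediate-mode class is perfectly fine. A sketch assuming the NFS CSI driver (`nfs.csi.k8s.io`) is installed — the server and share values here are placeholders for your environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-shared
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs.example.internal  # placeholder: your NFS server
  share: /exports               # placeholder: your export path
volumeBindingMode: Immediate    # NFS has no zone affinity, so Immediate is safe
```

Since an NFS share is reachable from any node, there's no zone-mismatch risk and no reason to delay provisioning.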
My Default StorageClass Template
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer  # Always use this for cloud
allowVolumeExpansion: true
reclaimPolicy: Delete
```
I use this template in every cloud-based cluster. Saves so much debugging.
7. Immediate vs WaitForFirstConsumer Deep Dive
Let me show you the exact difference with a real scenario.
Scenario: Three-Node Cluster in Different Zones
```
Node 1: us-east-1a
Node 2: us-east-1b
Node 3: us-east-1c
```
With Immediate Mode
volumeBindingMode: Immediate
Timeline:
```
T+0s:  Create PVC
T+1s:  Volume created in us-east-1a (random zone)
T+2s:  PVC shows Bound
T+30s: Create pod using this PVC
T+31s: Scheduler puts pod on Node 2 (us-east-1b)
T+32s: Try to attach volume from us-east-1a to node in us-east-1b
T+33s: FAIL - "volume is in different zone"
```
Result: Pod stuck forever. Volume in wrong zone.
With WaitForFirstConsumer
volumeBindingMode: WaitForFirstConsumer
Timeline:
```
T+0s:  Create PVC
T+1s:  PVC stays Pending (waiting)
T+30s: Create pod using this PVC
T+31s: Scheduler puts pod on Node 2 (us-east-1b)
T+32s: "Ah, pod is in us-east-1b, create volume there"
T+35s: Volume created in us-east-1b (same zone as pod!)
T+36s: Volume attaches successfully
T+37s: Pod starts running
```
Result: Everything works!
Visual Comparison
Immediate Mode:
PVC Created → Volume Created (random zone) → Pod Scheduled (might be different zone) → Problem!
WaitForFirstConsumer:
PVC Created → Pod Scheduled → Volume Created in same zone → Success!
The One Exception
Sometimes you want Immediate mode:
```yaml
# Pre-provisioning volumes for a specific node
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - specific-node-name
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
```
But 99% of the time, use WaitForFirstConsumer.
Quick Troubleshooting Checklist
When PVCs won’t provision:
```shell
# 1. Does the StorageClass exist?
kubectl get storageclass

# 2. Is there a default if PVC doesn't specify one?
kubectl get sc | grep default

# 3. Check for multiple defaults
kubectl get sc -o json | jq -r '.items[] | select(.metadata.annotations["storageclass.kubernetes.io/is-default-class"]=="true") | .metadata.name'

# 4. Is the provisioner valid?
kubectl get csidrivers

# 5. Check StorageClass parameters
kubectl get sc <name> -o yaml

# 6. What's the VolumeBindingMode?
kubectl get sc <name> -o jsonpath='{.volumeBindingMode}'

# 7. Check PVC events
kubectl describe pvc <pvc-name>

# 8. Check CSI driver logs
kubectl logs -n kube-system -l app=ebs-csi-controller -c csi-provisioner
```
My StorageClass Setup Checklist
When setting up a new cluster:
1. Create standard StorageClass (fast):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
2. Create default StorageClass (balanced):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
3. Create slow/cheap StorageClass (archives):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: ebs.csi.aws.com
parameters:
  type: sc1  # Cold HDD, cheapest
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
4. Verify setup:
```shell
kubectl get sc
# Should see: fast, standard (default), slow

kubectl get csidrivers
# Should see: ebs.csi.aws.com
```
5. Test:
```shell
# Create test PVC
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# Create a test pod that mounts the PVC.
# (Note: kubectl has no "set volume" subcommand - that's OpenShift's
#  "oc set volume" - so declare the volume in the pod spec instead.)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: busybox
      command: ['sleep', '3600']
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
EOF

# Check it binds
kubectl get pvc test-pvc
# Should show Bound within 30 seconds

# Cleanup
kubectl delete pod test
kubectl delete pvc test-pvc
```
Common Mistakes to Avoid
Mistake 1: Not Using allowVolumeExpansion
```yaml
# Bad
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com
# Missing: allowVolumeExpansion: true
```
Without this, you can’t resize volumes later. Always set it to true.
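With expansion enabled, growing a volume later is just a change to the PVC's storage request — a sketch, assuming the original claim asked for 10Gi on a CSI driver that supports expansion:

```yaml
# Same PVC, with the request raised; re-apply it and the CSI driver
# expands the underlying volume (no pod recreation needed for most drivers)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 20Gi  # was 10Gi - only increases are allowed, never decreases
```

Without `allowVolumeExpansion: true` on the StorageClass, this edit is rejected and your only option is a migrate-to-bigger-volume dance.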
Mistake 2: Using Immediate in Multi-Zone Clusters
```yaml
# Bad for multi-zone
volumeBindingMode: Immediate
```
Use WaitForFirstConsumer unless you have a specific reason not to.
Mistake 3: Forgetting to Set Default
```shell
# No default set
$ kubectl get sc
NAME   PROVISIONER       RECLAIMPOLICY
gp2    ebs.csi.aws.com   Delete
gp3    ebs.csi.aws.com   Delete
```
Always have one default StorageClass.
Mistake 4: Wrong Reclaim Policy
```yaml
# Dangerous for production!
reclaimPolicy: Delete  # Deletes data when PVC is deleted
```
For important data, consider using Retain:
```yaml
reclaimPolicy: Retain  # Keeps data even if PVC is deleted
```
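Here's a sketch of a full Retain-policy class for data you can't afford to lose (the `standard-retain` name is illustrative; assumes the AWS EBS CSI driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-retain
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
reclaimPolicy: Retain  # PV and backing disk survive PVC deletion
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

One caveat: with Retain, deleted PVCs leave the PV in a `Released` state, and the underlying disk keeps billing until you clean it up (or reclaim it) manually.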
Mistake 5: Not Testing StorageClass Before Using
Always test new StorageClasses with a simple PVC before using them in production.
Best Practices I Follow
1. Document Your StorageClasses
Keep a README in your cluster:
```markdown
## StorageClasses in This Cluster

### standard (default)
- Type: gp3
- IOPS: 3000
- Encrypted: Yes
- Use for: General purpose, most workloads
- Cost: ~$0.08/GB/month

### fast
- Type: io2
- IOPS: 10000
- Encrypted: Yes
- Use for: Databases, high-performance apps
- Cost: ~$0.125/GB/month + IOPS charges

### slow
- Type: sc1
- Use for: Backups, archives, infrequently accessed data
- Cost: ~$0.045/GB/month
```
2. Use Consistent Naming
```
# Cloud provider prefix + type
gp3-encrypted      # AWS gp3 with encryption
pd-ssd             # GCP persistent disk SSD
azuredisk-premium  # Azure premium disk
```
3. Always Set These Fields
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-sc
provisioner: ebs.csi.aws.com             # Required
volumeBindingMode: WaitForFirstConsumer  # Set explicitly
allowVolumeExpansion: true               # Always true
reclaimPolicy: Delete                    # Set explicitly
parameters:
  encrypted: "true"                      # Always encrypt
```
4. Monitor StorageClass Usage
```shell
# See which StorageClasses are actually used
kubectl get pvc --all-namespaces -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,STORAGECLASS:.spec.storageClassName,SIZE:.spec.resources.requests.storage

# Count PVCs per StorageClass
kubectl get pvc --all-namespaces -o json | jq -r '.items[] | .spec.storageClassName' | sort | uniq -c
```
Real-World Examples
Example 1: Web Application Stack
```yaml
# Fast storage for database
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast  # io2 with high IOPS
  resources:
    requests:
      storage: 100Gi
---
# Standard storage for application logs
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-logs
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard  # gp3
  resources:
    requests:
      storage: 20Gi
---
# Slow storage for backups
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backups
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow  # sc1 cold storage
  resources:
    requests:
      storage: 500Gi
```
Example 2: Multi-Tenant Setup
```yaml
# Gold tier (high performance)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
  labels:
    tier: premium
provisioner: ebs.csi.aws.com
parameters:
  type: io2
  iops: "16000"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
# Silver tier (balanced)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: silver
  labels:
    tier: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
# Bronze tier (economy)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bronze
  labels:
    tier: economy
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
Final Thoughts
After years of fighting StorageClass issues, here’s what I’ve learned:
Most StorageClass problems come down to:
- StorageClass doesn't exist (check with `kubectl get sc`)
- No default set (set one!)
- Wrong provisioner name (check `kubectl get csidrivers`)
- Invalid parameters (check CSI driver docs)
- Using Immediate instead of WaitForFirstConsumer (use Wait!)

My debugging process:
- Does StorageClass exist? (`kubectl get sc`)
- Is provisioner valid? (`kubectl get csidrivers`)
- What do the parameters say? (`kubectl get sc <name> -o yaml`)
- What's the VolumeBindingMode? (should be WaitForFirstConsumer)
- Check CSI driver logs for details

Prevention checklist:
- Always have exactly one default StorageClass
- Use `WaitForFirstConsumer` for cloud storage
- Set `allowVolumeExpansion: true`
- Always encrypt volumes (`encrypted: "true"`)
- Document your StorageClasses
- Test before production use
The four hours I spent debugging that first “StorageClass not found” error taught me to always verify StorageClasses exist before deploying anything. Now it’s the first thing I check in a new cluster.
Remember: StorageClass issues aren’t always obvious. A PVC sitting in Pending state could be because there’s no StorageClass, the wrong provisioner, invalid parameters, or a dozen other reasons. Start with the basics and work your way through systematically.
Have a StorageClass error that stumped you? Share it in the comments – I’m always learning new ways these can break!