7 Hidden Kubernetes StorageClass Errors That Keep PVCs Pending

Last Updated: January 2026

I remember the day I spent four hours debugging why a simple PVC wouldn’t bind. The YAML looked perfect. The cluster was healthy. Everything should have worked.

The error? storageclass.storage.k8s.io "standard" not found

Turns out, I was in a fresh cluster and nobody had created a StorageClass yet. The PVC was asking for “standard” but there was no StorageClass with that name. Zero. Nada. Nothing.

Once I created the StorageClass, everything worked instantly. Four hours for a 30-second fix.

Kubernetes StorageClass errors are sneaky because they make everything else look broken. Your PVCs sit in Pending. Your pods won’t start. Your deployment looks fine but nothing actually runs. And the error messages? They’re not always helpful.

If your persistent volumes aren’t provisioning, chances are it’s a StorageClass issue. Let me show you the seven most common StorageClass errors and how to fix them fast.

Understanding StorageClasses (The 2-Minute Version)


Before we fix errors, here’s what StorageClasses actually do:

StorageClass = Template for creating volumes

When you create a PVC, it asks for a StorageClass. The StorageClass tells Kubernetes:

  • Which provisioner to use (EBS, GCE PD, NFS, etc.)
  • What parameters to use (volume type, IOPS, encryption)
  • When to create the volume (immediately or wait for pod)

Think of it like ordering pizza:

  • PVC = “I want a large pizza”
  • StorageClass = “Large means 14 inches, thin crust, from Domino’s”
  • Provisioner = “Domino’s” (does the actual work)

When any part of this chain breaks, no pizza (volume) for you.
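In YAML terms, the link in that chain is a single string: `spec.storageClassName` on the PVC must exactly match `metadata.name` on a StorageClass. A minimal matched pair might look like this (assuming the AWS EBS CSI driver, as in the examples below):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard              # <- this name...
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard  # <- ...must match this exactly
  resources:
    requests:
      storage: 10Gi
```

If those two strings don't match, or the StorageClass doesn't exist at all, you get the errors below.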


1. StorageClass Not Found

The Error

$ kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   STORAGECLASS   AGE
my-pvc      Pending                       standard       5m

$ kubectl describe pvc my-pvc
Events:
  Warning  ProvisioningFailed  storageclass.storage.k8s.io "standard" not found

This is the error that cost me four hours. Your PVC references a StorageClass that simply doesn’t exist.

Why This Happens

Scenario 1: Fresh cluster, no StorageClasses created yet

$ kubectl get storageclass
No resources found
# Yep, that'll do it

Scenario 2: Typo in PVC

# PVC asks for "standrad"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standrad  # Typo! Should be "standard"
  resources:
    requests:
      storage: 10Gi

Scenario 3: Copied YAML from different cluster

You copied a PVC from production that uses “gp3-encrypted” but your dev cluster only has “gp2”.

How to Fix It

Step 1: Check what StorageClasses actually exist

kubectl get storageclass
# or shorter:
kubectl get sc

Step 2: Create the missing StorageClass

For AWS EBS:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

For GCP Persistent Disk:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

For Azure Disk:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: disk.csi.azure.com
parameters:
  skuName: Standard_LRS
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

Step 3: Apply it

kubectl apply -f storageclass.yaml

# Verify it exists
kubectl get sc standard

Step 4: Your existing PVC should now provision

kubectl get pvc my-pvc
# Should change from Pending to Bound within a minute

Pro Tip

If you don’t want to specify storageClassName in every PVC, set a default (see next section).

2. Default StorageClass Missing

The Problem

# PVC with no storageClassName specified
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  # No storageClassName specified!
  resources:
    requests:
      storage: 5Gi

$ kubectl get pvc my-pvc
NAME     STATUS    VOLUME   STORAGECLASS   AGE
my-pvc   Pending            <none>         10m

When you don’t specify a StorageClass, Kubernetes looks for one marked as default. If there isn’t one, the PVC stays Pending forever.

Check for Default StorageClass

$ kubectl get storageclass
NAME       PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE
gp2        ebs.csi.aws.com        Delete          Immediate
gp3        ebs.csi.aws.com        Delete          WaitForFirstConsumer

# No "(default)" annotation = no default

The Fix: Set a Default

# Mark a StorageClass as default
kubectl patch storageclass gp3 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Verify it worked
$ kubectl get sc
NAME              PROVISIONER         RECLAIMPOLICY
gp2               ebs.csi.aws.com     Delete
gp3 (default)     ebs.csi.aws.com     Delete
#   ↑ This means it's default

Or create a new default StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # This line!
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

Important Notes

  • You should have exactly ONE default StorageClass
  • If you have zero, PVCs without storageClassName stay Pending
  • If you have multiple, weird things happen (see next section)

My Standard Practice

I always create a default StorageClass in every cluster:

# In my cluster setup scripts
kubectl apply -f storageclass-default.yaml

# Verify default exists
kubectl get sc | grep default

This way, developers don’t need to remember to specify storageClassName in every PVC.
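That `grep default` step can be turned into a hard check in the setup script. A minimal sketch in plain POSIX shell — the `sample` variable below is a stand-in for real `kubectl get sc` output, which you'd pipe in instead:

```shell
# Count StorageClasses carrying the "(default)" marker in `kubectl get sc` output.
count_default_sc() {
  grep -c '(default)'
}

# In a real cluster: n=$(kubectl get sc | count_default_sc)
sample='NAME            PROVISIONER       RECLAIMPOLICY
gp2             ebs.csi.aws.com   Delete
gp3 (default)   ebs.csi.aws.com   Delete'

n=$(printf '%s\n' "$sample" | count_default_sc)
if [ "$n" -eq 1 ]; then
  echo "OK: exactly one default StorageClass"
else
  echo "WARNING: $n default StorageClasses (want exactly 1)"
fi
```

Anything other than exactly one is worth failing the setup over — zero defaults strands PVCs in Pending, and more than one causes the problem in the next section.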

3. Multiple Default StorageClasses

The Chaos

$ kubectl get storageclass
NAME                  PROVISIONER         RECLAIMPOLICY
gp2 (default)         ebs.csi.aws.com     Delete
gp3 (default)         ebs.csi.aws.com     Delete
standard (default)    ebs.csi.aws.com     Delete

Three defaults! Which one will Kubernetes use?

Answer: it depends on your Kubernetes version. Older releases refused to create PVCs that omitted storageClassName while more than one default existed; since v1.28, Kubernetes quietly picks the most recently created default. Either way, the class you end up on is an accident of cluster history, not a decision you made.

This happened to me after importing StorageClass definitions from multiple sources. Each one was marked as default, and I didn’t notice until PVCs started binding to random StorageClasses.

Why This Is Bad

# Developer creates PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  # No storageClassName - expects default

Sometimes it lands on gp2, sometimes gp3 — two classes with different baseline IOPS and pricing. Your database performance is now unpredictable. Not good.

How to Fix It

Find all defaults:

kubectl get storageclass -o json | jq -r '.items[] | select(.metadata.annotations["storageclass.kubernetes.io/is-default-class"]=="true") | .metadata.name'

Remove default annotation from unwanted ones:

# Keep gp3 as default, remove from others
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# Verify only one default remains
$ kubectl get sc
NAME              PROVISIONER         RECLAIMPOLICY
gp2               ebs.csi.aws.com     Delete
gp3 (default)     ebs.csi.aws.com     Delete
standard          ebs.csi.aws.com     Delete

Prevention

In your cluster setup documentation:

## Default StorageClass Policy
- Only ONE default StorageClass allowed
- Default should be: gp3 (balanced performance/cost)
- Check before adding new StorageClasses: kubectl get sc

I add this to every cluster’s README after learning this lesson the hard way.

4. Invalid Provisioner

The Error

$ kubectl describe pvc my-pvc
Events:
  Warning  ProvisioningFailed  failed to provision volume: 
  failed to find provisioner "kubernetes.io/aws-ebs"

Your StorageClass references a provisioner that doesn’t exist or isn’t running.

Common Invalid Provisioners

Old in-tree provisioner names (deprecated):

# These don't work anymore in modern Kubernetes
provisioner: kubernetes.io/aws-ebs        # Old!
provisioner: kubernetes.io/gce-pd         # Old!
provisioner: kubernetes.io/azure-disk     # Old!

# Should be CSI drivers now:
provisioner: ebs.csi.aws.com              # New
provisioner: pd.csi.storage.gke.io        # New
provisioner: disk.csi.azure.com           # New

Typos:

provisioner: ebs.csi.aws.comm  # Extra 'm'
provisioner: ebs-csi-aws-com   # Wrong separators

Provisioner not installed:

provisioner: nfs.csi.k8s.io
# But the NFS CSI driver isn't installed in your cluster

How to Fix It

Step 1: Check what provisioners are available

# See installed CSI drivers
kubectl get csidrivers
NAME                    ATTACHREQUIRED   PODINFOONMOUNT
ebs.csi.aws.com         true             false

# This is your valid provisioner name

Step 2: Update StorageClass with correct provisioner

kubectl edit storageclass my-sc

# Change provisioner to match CSI driver name exactly
# From: provisioner: kubernetes.io/aws-ebs
# To:   provisioner: ebs.csi.aws.com

Step 3: If CSI driver doesn’t exist, install it

For AWS EBS:

kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.25"

For GCP PD:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/master/deploy/kubernetes/deploy-driver.yaml

For NFS:

helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system

My Cheat Sheet

I keep this in every cluster’s docs:

## Valid Provisioners in This Cluster

AWS EBS: ebs.csi.aws.com
AWS EFS: efs.csi.aws.com
NFS: nfs.csi.k8s.io

Check: kubectl get csidrivers

5. Incorrect Parameters in StorageClass

The Error

$ kubectl describe pvc my-pvc
Events:
  Warning  ProvisioningFailed  failed to provision volume: 
  rpc error: code = InvalidArgument desc = Invalid parameter "type": "gp7"

Your StorageClass has invalid parameters. The provisioner doesn’t understand what you’re asking for.

Common Parameter Mistakes

Invalid volume type:

parameters:
  type: gp7  # Doesn't exist! Valid: gp2, gp3, io1, io2, st1, sc1

IOPS too high for volume type:

parameters:
  type: gp3
  iops: "20000"  # gp3 max is 16,000!

Missing required parameters:

# io2 volumes REQUIRE iops parameter
parameters:
  type: io2
  # Missing: iops: "10000"

Wrong parameter names:

parameters:
  volumeType: gp3     # Wrong! Should be "type"
  encryption: "true"  # Wrong! Should be "encrypted"

Cloud Provider Specific Issues

AWS EBS:

# Common mistakes
parameters:
  type: gp3
  iops: "3000"           # Must be string, not number
  throughput: "125"      # Must be string
  encrypted: "true"      # Must be string "true", not boolean
  kmsKeyId: "arn:..."    # Full ARN required

GCP Persistent Disk:

parameters:
  type: pd-standard     # Valid: pd-standard, pd-balanced, pd-ssd
  replication-type: none  # Valid: none, regional-pd

Azure Disk:

parameters:
  skuName: Standard_LRS  # Valid: Standard_LRS, Premium_LRS, StandardSSD_LRS
  location: eastus       # Must match cluster region

How to Fix It

Step 1: Check CSI driver logs for exact error

kubectl logs -n kube-system -l app=ebs-csi-controller -c csi-provisioner --tail=50

Step 2: Look up valid parameters

Check the CSI driver's documentation — each driver's repo (aws-ebs-csi-driver, gcp-compute-persistent-disk-csi-driver, azuredisk-csi-driver, etc.) documents the StorageClass parameters it accepts.

Step 3: Fix the StorageClass

kubectl edit storageclass my-sc

# Correct the parameters based on documentation

Step 4: Test with a new PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-sc  # Your fixed StorageClass
  resources:
    requests:
      storage: 1Gi  # Small for testing

Working Examples

AWS gp3 with encryption:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-encrypted
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

AWS io2 high-performance:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: io2-fast
provisioner: ebs.csi.aws.com
parameters:
  type: io2
  iops: "10000"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

GCP regional persistent disk:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcp-regional
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

6. VolumeBindingMode Misconfigured

The Problem

There are two VolumeBindingModes:

  • Immediate – Create volume as soon as PVC is created
  • WaitForFirstConsumer – Wait until a pod uses the PVC

Using the wrong one causes problems.

Issue 1: Zone Mismatch with Immediate Mode

This bit me hard in production.

# StorageClass with Immediate binding
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: ebs.csi.aws.com
volumeBindingMode: Immediate  # Creates volume immediately
parameters:
  type: io2
  iops: "10000"

What happens:

  1. You create a PVC
  2. Volume is created in random AZ (say, us-east-1a)
  3. Your pod gets scheduled in us-east-1b
  4. Volume can’t attach (wrong zone!)
  5. Pod stuck in ContainerCreating

$ kubectl describe pod my-pod
Events:
  Warning  FailedAttachVolume  AttachVolume.Attach failed: 
  volume is in us-east-1a but pod is scheduled in us-east-1b

The Fix: Use WaitForFirstConsumer

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer  # Wait for pod!
parameters:
  type: io2
  iops: "10000"

With WaitForFirstConsumer:

  1. You create a PVC (stays Pending)
  2. You create a pod that uses the PVC
  3. Pod gets scheduled to a node (say, us-east-1a)
  4. Volume is created in the SAME zone (us-east-1a)
  5. Everything works!

Issue 2: Testing PVCs with WaitForFirstConsumer

# You create a PVC to test
kubectl apply -f pvc.yaml

$ kubectl get pvc
NAME      STATUS    VOLUME   CAPACITY   STORAGECLASS
test-pvc  Pending                       gp3

# Stays Pending forever - is it broken?

It’s not broken! It’s waiting for a pod. This confused me the first time I used WaitForFirstConsumer.

Test it properly:

# Create a test pod
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test
    image: busybox
    command: ['sleep', '3600']
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc  # References your PVC

Now the PVC will bind:

$ kubectl get pvc
NAME      STATUS   VOLUME                     CAPACITY   STORAGECLASS
test-pvc  Bound    pvc-abc123-def456          10Gi       gp3

When to Use Which Mode

Use WaitForFirstConsumer (recommended for cloud):

  • Multi-zone clusters
  • Cloud storage (EBS, GCE PD, Azure Disk)
  • When you want volume in same zone as pod

Use Immediate:

  • Single-zone clusters
  • Testing/development
  • When you need to pre-provision volumes
  • NFS or storage that’s zone-agnostic

My Default StorageClass Template

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer  # Always use this for cloud
allowVolumeExpansion: true
reclaimPolicy: Delete

I use this template in every cloud-based cluster. Saves so much debugging.

7. Immediate vs WaitForFirstConsumer Deep Dive

Let me show you the exact difference with a real scenario.

Scenario: Three-Node Cluster in Different Zones

Node 1: us-east-1a
Node 2: us-east-1b  
Node 3: us-east-1c

With Immediate Mode

volumeBindingMode: Immediate

Timeline:

T+0s:  Create PVC
T+1s:  Volume created in us-east-1a (random zone)
T+2s:  PVC shows Bound
T+30s: Create pod using this PVC
T+31s: Scheduler puts pod on Node 2 (us-east-1b)
T+32s: Try to attach volume from us-east-1a to node in us-east-1b
T+33s: FAIL - "volume is in different zone"

Result: Pod stuck forever. Volume in wrong zone.

With WaitForFirstConsumer

volumeBindingMode: WaitForFirstConsumer

Timeline:

T+0s:  Create PVC
T+1s:  PVC stays Pending (waiting)
T+30s: Create pod using this PVC
T+31s: Scheduler puts pod on Node 2 (us-east-1b)
T+32s: "Ah, pod is in us-east-1b, create volume there"
T+35s: Volume created in us-east-1b (same zone as pod!)
T+36s: Volume attaches successfully
T+37s: Pod starts running

Result: Everything works!

Visual Comparison

Immediate Mode:

PVC Created → Volume Created (random zone) → Pod Scheduled (might be different zone) → Problem!

WaitForFirstConsumer:

PVC Created → Pod Scheduled → Volume Created in same zone → Success!

The One Exception

Sometimes you pre-provision volumes statically instead of relying on dynamic provisioning — you create the PV yourself, so there's no provisioner waiting on a pod:

# Pre-provisioning volumes for a specific node
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - specific-node-name
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1

Even then, local volumes like this one actually require WaitForFirstConsumer on their StorageClass, so the scheduler can honor the node affinity. Bottom line: 99% of the time, use WaitForFirstConsumer.

Quick Troubleshooting Checklist

When PVCs won’t provision:

# 1. Does the StorageClass exist?
kubectl get storageclass

# 2. Is there a default if PVC doesn't specify one?
kubectl get sc | grep default

# 3. Check for multiple defaults
kubectl get sc -o json | jq -r '.items[] | select(.metadata.annotations["storageclass.kubernetes.io/is-default-class"]=="true") | .metadata.name'

# 4. Is the provisioner valid?
kubectl get csidrivers

# 5. Check StorageClass parameters
kubectl get sc <name> -o yaml

# 6. What's the VolumeBindingMode?
kubectl get sc <name> -o jsonpath='{.volumeBindingMode}'

# 7. Check PVC events
kubectl describe pvc <pvc-name>

# 8. Check CSI driver logs
kubectl logs -n kube-system -l app=ebs-csi-controller -c csi-provisioner

My StorageClass Setup Checklist

When setting up a new cluster:

1. Create standard StorageClass (fast):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

2. Create default StorageClass (balanced):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

3. Create slow/cheap StorageClass (archives):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: ebs.csi.aws.com
parameters:
  type: sc1  # Cold HDD, cheapest
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

4. Verify setup:

kubectl get sc
# Should see: fast, standard (default), slow

kubectl get csidrivers
# Should see: ebs.csi.aws.com

5. Test:

# Create test PVC
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# Create a test pod that mounts the PVC
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: busybox
    command: ['sleep', '3600']
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF

# Check it binds
kubectl get pvc test-pvc
# Should show Bound within 30 seconds

# Cleanup
kubectl delete pod test
kubectl delete pvc test-pvc

Common Mistakes to Avoid

Mistake 1: Not Using allowVolumeExpansion

# Bad
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com
# Missing: allowVolumeExpansion: true

Without this, you can’t resize volumes later. Always set it to true.
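The fix is one extra line:

```yaml
# Good
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com
allowVolumeExpansion: true  # PVCs using this class can be resized later
```

Unlike provisioner and parameters, allowVolumeExpansion is mutable, so you can also patch it onto an existing StorageClass.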

Mistake 2: Using Immediate in Multi-Zone Clusters

# Bad for multi-zone
volumeBindingMode: Immediate

Use WaitForFirstConsumer unless you have a specific reason not to.

Mistake 3: Forgetting to Set Default

# No default set
$ kubectl get sc
NAME       PROVISIONER         RECLAIMPOLICY
gp2        ebs.csi.aws.com     Delete
gp3        ebs.csi.aws.com     Delete

Always have one default StorageClass.

Mistake 4: Wrong Reclaim Policy

# Dangerous for production!
reclaimPolicy: Delete  # Deletes data when PVC is deleted

For important data, consider using Retain:

reclaimPolicy: Retain  # Keeps data even if PVC is deleted

Mistake 5: Not Testing StorageClass Before Using

Always test new StorageClasses with a simple PVC before using them in production.

Best Practices I Follow

1. Document Your StorageClasses

Keep a README in your cluster:

## StorageClasses in This Cluster

### standard (default)
- Type: gp3
- IOPS: 3000
- Encrypted: Yes
- Use for: General purpose, most workloads
- Cost: ~$0.08/GB/month

### fast
- Type: io2
- IOPS: 10000
- Encrypted: Yes
- Use for: Databases, high-performance apps
- Cost: ~$0.125/GB/month + IOPS charges

### slow
- Type: sc1
- Use for: Backups, archives, infrequently accessed data
- Cost: ~$0.015/GB/month

2. Use Consistent Naming

# Cloud provider prefix + type
gp3-encrypted     # AWS gp3 with encryption
pd-ssd            # GCP persistent disk SSD
azuredisk-premium # Azure premium disk

3. Always Set These Fields

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-sc
provisioner: ebs.csi.aws.com           # Required
volumeBindingMode: WaitForFirstConsumer # Set explicitly
allowVolumeExpansion: true              # Always true
reclaimPolicy: Delete                   # Set explicitly
parameters:
  encrypted: "true"                     # Always encrypt

4. Monitor StorageClass Usage

# See which StorageClasses are actually used
kubectl get pvc --all-namespaces -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,STORAGECLASS:.spec.storageClassName,SIZE:.spec.resources.requests.storage

# Count PVCs per StorageClass
kubectl get pvc --all-namespaces -o json | jq -r '.items[] | .spec.storageClassName' | sort | uniq -c

Real-World Examples

Example 1: Web Application Stack

# Fast storage for database
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast  # io2 with high IOPS
  resources:
    requests:
      storage: 100Gi
---
# Standard storage for application logs
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-logs
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard  # gp3
  resources:
    requests:
      storage: 20Gi
---
# Slow storage for backups
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backups
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow  # sc1 cold storage
  resources:
    requests:
      storage: 500Gi

Example 2: Multi-Tenant Setup

# Gold tier (high performance)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
  labels:
    tier: premium
provisioner: ebs.csi.aws.com
parameters:
  type: io2
  iops: "16000"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
# Silver tier (balanced)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: silver
  labels:
    tier: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
# Bronze tier (economy)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bronze
  labels:
    tier: economy
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

Final Thoughts

After years of fighting StorageClass issues, here’s what I’ve learned:

Most StorageClass problems come down to:

  1. StorageClass doesn’t exist (check with kubectl get sc)
  2. No default set (set one!)
  3. Wrong provisioner name (check kubectl get csidrivers)
  4. Invalid parameters (check CSI driver docs)
  5. Using Immediate instead of WaitForFirstConsumer (use Wait!)

My debugging process:

  1. Does StorageClass exist? (kubectl get sc)
  2. Is provisioner valid? (kubectl get csidrivers)
  3. What do the parameters say? (kubectl get sc <name> -o yaml)
  4. What’s the VolumeBindingMode? (should be WaitForFirstConsumer)
  5. Check CSI driver logs for details

Prevention checklist:

  • Always have exactly one default StorageClass
  • Use WaitForFirstConsumer for cloud storage
  • Set allowVolumeExpansion: true
  • Always encrypt volumes (encrypted: "true")
  • Document your StorageClasses
  • Test before production use
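Most of that checklist can be linted before anything touches the cluster. A minimal sketch, pure shell with no cluster access — it checks a StorageClass manifest read from stdin for the fields above (the `lint_sc` name and `sample` manifest are illustrative):

```shell
# Flag a StorageClass manifest (read from stdin) missing recommended fields.
lint_sc() {
  manifest=$(cat)
  status=0
  for want in 'volumeBindingMode: WaitForFirstConsumer' \
              'allowVolumeExpansion: true' \
              'encrypted: "true"'; do
    case "$manifest" in
      *"$want"*) ;;                          # present, nothing to report
      *) echo "missing: $want"; status=1 ;;
    esac
  done
  return $status
}

# A manifest that violates all three recommendations:
sample='apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com
volumeBindingMode: Immediate'

printf '%s\n' "$sample" | lint_sc || echo "manifest needs fixes before apply"
```

In practice you'd run `lint_sc < storageclass.yaml` in CI or a pre-commit hook; it's a crude substring check, not a YAML parser, but it catches the mistakes above cheaply.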

The four hours I spent debugging that first “StorageClass not found” error taught me to always verify StorageClasses exist before deploying anything. Now it’s the first thing I check in a new cluster.

Remember: StorageClass issues aren’t always obvious. A PVC sitting in Pending state could be because there’s no StorageClass, the wrong provisioner, invalid parameters, or a dozen other reasons. Start with the basics and work your way through systematically.

Have a StorageClass error that stumped you? Share it in the comments – I’m always learning new ways these can break!
