Kubernetes Network Debugging Guide (2026): Logs, Tools & Troubleshooting

Last Updated: January 2026

Introduction

Kubernetes network issues can cost companies thousands of dollars per minute of downtime. This article teaches you how to debug network problems using logs and professional tools, the same techniques platform engineers use at Fortune 500 companies.

What You’ll Learn in This Article:

  • How to read the Kubernetes logs essential for network troubleshooting
  • 10 of the most powerful debugging tools
  • Real-world debugging workflows you can follow
  • Time-saving commands and automation steps

Series Navigation:
Part 1: 10 Common Kubernetes Network Errors
Part 2: Kubernetes Network Logs & Tools (You are here)
Part 3: Complete Guide to DNS Issues and Fixes (Coming soon)

Why Kubernetes Logs Matter for Network Debugging

The Real Impact of Kubernetes Network Issues

Meet Max, a DevOps engineer. It’s 2:45 PM on a Thursday. His production app has just crashed with timeout errors. Users across four continents cannot reach its services. Revenue is bleeding at $400/minute.

He has barely 20 minutes to fix this before the escalation process kicks in.

The Challenge: Kubernetes generates logs across multiple layers:

  • Application pods
  • Services
  • Ingress
  • CNI plugins
  • CoreDNS
  • Node systems

Without a systematic approach, you’re searching blindly across every layer of the cluster.

The Log Hierarchy

Application Layer    → Pod logs
Service Layer        → Endpoints and events
Ingress Layer        → Controller logs
Network Layer        → CNI plugin logs
DNS Layer            → CoreDNS logs
Infrastructure       → Node/kubelet logs

Golden Rule: Start at the application layer and work down. 85% of issues appear in the pod logs themselves.

Understanding Kubernetes Logs

1. Pod Logs (Application Layer)

Essential Commands

# View current pod logs
kubectl logs <pod-name>

# Crashed pod logs
kubectl logs <pod-name> --previous

# Stream logs in real time
kubectl logs <pod-name> -f

# Multi-container pod logs
kubectl logs <pod-name> -c <container>

# Time-based filtering of pod logs
kubectl logs <pod-name> --since=1h

# Last N lines of pod logs
kubectl logs <pod-name> --tail=100

# All pods with specific labels
kubectl logs -l app=backend --all-containers

Decoding Network Errors

Example log output:

2026-01-07 14:22:15 ERROR: dial tcp: lookup backend-services: no such host
2026-01-07 14:22:20 ERROR: Connection timeout after 5s

Common Error Patterns:

Error                   Root Cause          Check
connection refused      Wrong port          Service port config
no such host            DNS failure         CoreDNS logs
i/o timeout             Network blocked     Policies, firewall
connection reset        Pod crash           Pod status
dial tcp errors         No TCP connection   Service endpoints
TLS handshake timeout   Certificate issue   Secrets, certs
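For instance, if the logs show connection refused, the quickest check is whether the Service’s targetPort matches the port the container actually listens on. A minimal sketch, assuming a Service named backend-service and pods labelled app=backend (adjust the names to your cluster):

# Port the Service forwards traffic to
kubectl get svc backend-service -o jsonpath='{.spec.ports[*].targetPort}'

# Port(s) the containers actually expose
kubectl get pods -l app=backend -o jsonpath='{.items[*].spec.containers[*].ports[*].containerPort}'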

Advanced Filtering for Pod Errors

# Find with pattern
kubectl logs <pod> | grep -i "error\|failed\|timeout"

# Network-specific filtering
kubectl logs <pod> | grep -i "connection\|dns\|dial"

# Count occurrences of specific error
kubectl logs <pod> | grep "connection refused" | wc -l


# Export logs to a file for offline analysis
kubectl logs <pod> --since=24h > debug.log

2. Services & Endpoints Debugging

Check Service Events

kubectl describe service <service-name>

Look for events:

Warning  SyncLoadBalancerFailed  Error: quota exceeded
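You can also pull those events directly with a field selector instead of scanning the full describe output; a sketch assuming the Service is named backend-service:

kubectl get events \
  --field-selector involvedObject.kind=Service,involvedObject.name=backend-service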

Critical: Verify the Endpoints

kubectl get endpoints <service-name>

Healthy output:

NAME        ENDPOINTS                    AGE
backend     10.242.1.5:8080,10.244.2.8:8080  10m

Problem – No endpoints:

NAME        ENDPOINTS   AGE
backend     <none>      10m

If you see <none>:

# Check pods exist
kubectl get pods -l app=backend --show-labels

# Verify that the selector matches the pod labels
kubectl get svc backend -o jsonpath='{.spec.selector}'
kubectl get pods -l app=backend -o jsonpath='{.items[*].metadata.labels}'

# Check pod readiness
kubectl get pods -l app=backend -o wide
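If the labels and the selector disagree, the fastest fix is usually to point the Service selector at the label the pods already carry. A minimal sketch, assuming the Service is named backend and the pods are labelled app=backend:

# Align the Service selector with the pods' actual label
kubectl patch svc backend -p '{"spec":{"selector":{"app":"backend"}}}'

# Endpoints should populate within a few seconds
kubectl get endpoints backend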

3. CoreDNS Logs

Why CoreDNS Matters

CoreDNS resolves service names like backend.default.svc.cluster.local. When DNS fails, pods cannot find each other.

Access CoreDNS Logs

# Find the CoreDNS pods
kubectl get pods -n kube-system -l k8s-app=kube-dns

# View logs
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=100

# Stream logs in real time
kubectl logs -n kube-system -l k8s-app=kube-dns -f

Understanding DNS Logs

Successful query:

[INFO] 10.243.1.5 - "A IN backend.default.svc.cluster.local" NOERROR

Failed query:

[ERROR] plugin/errors: read udp: i/o timeout

Status Codes:

Code       Meaning          Action
NOERROR    Success          None needed
NXDOMAIN   Name not found   Check the spelling
SERVFAIL   Server error     Check the CoreDNS config
timeout    Query timeout    Scale CoreDNS
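If you keep getting NXDOMAIN for a name you are sure exists, check the pod’s DNS configuration; short names only resolve when the right search domains are present. A quick sketch (the second command assumes the image ships nslookup):

# Search domains and nameserver the pod actually uses
kubectl exec -it <pod> -- cat /etc/resolv.conf

# Retry with the fully qualified name to rule out search-domain issues
kubectl exec -it <pod> -- nslookup backend-service.default.svc.cluster.local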

Test DNS Resolution

kubectl run dns-test --image=busybox:1.28 --rm -it -- \
  nslookup backend-service

Expected output:

Server:    10.95.0.10
Address 1: 10.95.0.10 kube-dns.kube-system.svc.cluster.local
Name:      backend-service
Address 1: 10.95.0.1 backend-service.default.svc.cluster.local

Fix Common CoreDNS Issues

Problem: Queries timing out

# Scale up CoreDNS
kubectl scale deployment coredns -n kube-system --replicas=4

# Increase the CoreDNS resource limits
kubectl edit deployment coredns -n kube-system
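If scaling alone doesn’t explain the timeouts, you can temporarily turn on per-query logging with CoreDNS’s log plugin. This is a sketch only; the exact Corefile in your cluster will differ, so add the log line to the existing server block rather than replacing it:

# Open the CoreDNS config for editing
kubectl edit configmap coredns -n kube-system

# Inside the Corefile, add "log" to the existing server block, e.g.:
# .:53 {
#     errors
#     log        # temporary: logs every query, remove when done
#     ...
# }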

4. CNI Plugin Logs – The Network Layer

What is CNI?

CNI (Container Network Interface) plugins provide the actual pod networking:

  • Calico – network policies & routing
  • Flannel – simple overlay network
  • Cilium – eBPF-based networking
  • Weave – encrypted overlay

Access CNI Logs

# Calico
kubectl logs -n kube-system -l k8s-app=calico-node

# Flannel
kubectl logs -n kube-system -l app=flannel

# Cilium
kubectl logs -n kube-system -l k8s-app=cilium

# Weave
kubectl logs -n kube-system -l name=weave-net

Common CNI Errors

IP pool exhausted
Failed to create veth pair
network plugin not ready

Solution – expand the IP pool (Calico example):

kubectl get ippool -o yaml
kubectl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-pool
spec:
  cidr: 10.245.0.0/16
  natOutgoing: true
EOF
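After applying the pool, it’s worth confirming that Calico can actually allocate from it. A hedged example using calicoctl (it must be installed and pointed at your cluster):

# List the configured pools
calicoctl get ippool -o wide

# Show per-pool IP usage and allocation blocks
calicoctl ipam show --show-blocks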

5. Ingress Controller Logs – For External Traffic

Access NGINX Ingress Logs

kubectl logs -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx --tail=100

# Stream in real time
kubectl logs -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx -f

Log Format Explained

192.167.1.100 - [07/Jan/2026:14:45:23] "GET /api/users HTTP/1.1" 502

Key Fields:

  • 192.167.1.100 – Client IP
  • GET /api/users – Request
  • 502 – Bad Gateway (backend unreachable)

Common Status Codes

Code   Meaning               Check
502    Backend unreachable   Endpoints, pods
504    Gateway timeout       App performance
404    Path not found        Ingress rules
503    No healthy backends   Pod readiness
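When you see a spike of 502s, it usually pays to count them and then immediately check whether the backend Service still has endpoints. A quick sketch with placeholder names:

# Count 502 responses in the last 10 minutes
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --since=10m \
  | grep '" 502' | wc -l

# Then confirm the backend the Ingress routes to still has endpoints
kubectl get endpoints backend-service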

6. Node Network Logs (Advanced)

When to check: CNI errors on specific nodes, or inter-node connectivity issues.

# Get the node status
kubectl get nodes
kubectl describe node <node-name>

# SSH to the node, then:
sudo journalctl -u kubelet -n 50
sudo journalctl -u containerd -n 50
sudo dmesg | grep -i "killed process"
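If you cannot SSH to the node, newer kubectl versions can start a debug pod on the node itself with the host filesystem mounted under /host. A sketch, assuming the ubuntu image is acceptable in your cluster:

# Start a debug pod on the node (host filesystem available at /host)
kubectl debug node/<node-name> -it --image=ubuntu

# Inside the debug pod, read the host's kubelet logs
chroot /host journalctl -u kubelet -n 50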

Essential Kubernetes Tools for Debugging

Tool 1: kubectl

Advanced commands:

# Pod IPs
kubectl get pods -o wide
kubectl get pods -o jsonpath='{.items[*].status.podIP}'

# Test connectivity from inside a pod
kubectl exec -it <pod> -- curl http://<target>:8080

# Network policies
kubectl get networkpolicies --all-namespaces

# Debug with ephemeral container (k8s 1.24+)
kubectl debug <pod> -it --image=nicolaka/netshoot

Tool 2: stern – Multi-Pod Log Streamer

Why use stern: Stream logs from multiple pods at once, with color-coding per pod.

Installation:

# macOS
brew install stern

# Linux
wget https://github.com/stern/stern/releases/latest/download/stern_linux_amd64.tar.gz
tar -xzf stern_linux_amd64.tar.gz
sudo mv stern /usr/local/bin/

Usage:

# Tail all backend pods
stern backend

# Specific namespace
stern -n production frontend

# Since 1 hour, filter errors
stern backend --since 1h | grep ERROR

# All namespaces
stern . --all-namespaces

Tool 3: k9s – Terminal UI

Why use k9s: Visual cluster management from your terminal.

Installation:

# macOS
brew install k9s

# Linux
wget https://github.com/derailed/k9s/releases/latest/download/k9s_Linux_amd64.tar.gz
tar -xzf k9s_Linux_amd64.tar.gz
sudo mv k9s /usr/local/bin/

Launch: k9s

Tool 4: netshoot – The Network Swiss Army Knife

Includes: curl, dig, nslookup, tcpdump, netstat, traceroute, iperf, and 50+ more tools.

Usage:

# Create debug pod
kubectl run netshoot --rm -it --image=nicolaka/netshoot -- bash

# Inside netshoot:
nslookup backend-service                # Test DNS
dig backend.default.svc.cluster.local   # Detailed DNS lookup
curl http://backend:8080/health         # HTTP test
ping backend-service                    # Connectivity
traceroute backend-service              # Route tracing
tcpdump -i eth0 port 8080               # Packet capture
netstat -tulpn                          # Show open ports and connections

Debug existing pod:

kubectl debug <pod> -it --image=nicolaka/netshoot

Deploy a debug pod:

apiVersion: v1
kind: Pod
metadata:
  name: netshoot
spec:
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: ["sleep", "infinity"]

Tool 5: kubetail – Multi-Pod Logs

Installation:

wget https://raw.githubusercontent.com/johanhaleby/kubetail/master/kubetail
chmod +x kubetail
sudo mv kubetail /usr/local/bin/

Usage:

kubetail -l app=backend
kubetail -n production -l app=frontend

Tool 6: ksniff – Packet Capture

Installation:

kubectl krew install sniff

Usage:

# Capture packets
kubectl sniff <pod-name>

# Save to file
kubectl sniff <pod-name> -o capture.pcap

# Filter by port
kubectl sniff <pod-name> -f "port 8080"

# Open in Wireshark
kubectl sniff <pod> -o - | wireshark -k -i -

Tool 7: Popeye – Cluster Sanitizer

What it does: Scans your cluster for misconfigurations.

Installation:

# macOS
brew install derailed/popeye/popeye

# Linux
wget https://github.com/derailed/popeye/releases/latest/download/popeye_Linux_x86_64.tar.gz

Usage:

popeye                           # Scan all
popeye -n production            # Specific namespace
popeye --save --output-file reports.html

Tool 8: kubectx & kubens – Context Switching

Installation:

brew install kubectx

Usage:

kubectx                 # List contexts
kubectx staging         # Switch context
kubens production       # Switch namespace

Tool 9: Goldpinger – Connectivity Visualization

Installation:

kubectl apply -f https://raw.githubusercontent.com/bloomberg/goldpinger/master/extras/example-goldpinger.yaml

Access:

kubectl port-forward svc/goldpinger 8080:8080
# Visit http://localhost:8080

Shows a visual map of node-to-node connectivity.

Tool 10: Cilium Hubble – Flow Observability

Requires: Cilium CNI

Installation:

helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

Usage:

hubble observe                  # Watch flows
hubble observe --pod backend    # Filter by pod
hubble observe --verdict DROPPED # Show dropped packets
hubble ui                       # Launch UI

Practical Debugging Workflow

Real-World Example: Frontend Cannot Reach the Backend

Problem: Users start reporting timeout errors.

Step 1: Check Application Logs

kubectl logs -l app=frontend --tail=40

Output: Error: Connection timeout to backend-service

Step 2: Test the DNS

kubectl run test --rm -it --image=busybox -- nslookup backend-service

Result: DNS resolves correctly

Step 3: Check the Service & Endpoints

kubectl get svc backend-service
kubectl get endpoints backend-service

Result: Endpoints show <none>

Step 4: Find the Missing Pods

kubectl get pods -l app=backend

Result: No pods found.

Step 5: Check the Deployment

kubectl describe deployment backend

Output: FailedCreate: exceeded quota

Solution

kubectl edit resourcequota -n production
# Increase the limits

Result: Pods created → Endpoints appear → Frontend connects
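To confirm the fix end to end, re-run the same checks from steps 3 and 4 (placeholder names):

kubectl get pods -l app=backend
kubectl get endpoints backend-service
kubectl logs -l app=frontend --tail=10   # the timeout errors should stop appearing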

Best Practices & Quick Reference

Create a Debug Script

#!/bin/bash
# debug-k8s-network.sh

APP=$1
if [ -z "$APP" ]; then
  echo "Usage: $0 <app-label>"
  exit 1
fi

echo "=== Pod Status ==="
kubectl get pods -l app="$APP" -o wide

echo "=== Services ==="
kubectl get svc -l app="$APP"

echo "=== Endpoints ==="
kubectl get endpoints -l app="$APP"

echo "=== Recent Logs ==="
kubectl logs -l app="$APP" --tail=30

echo "=== DNS Test ==="
kubectl run dns-test-$$ --rm -it --image=busybox --restart=Never -- nslookup kubernetes.default

echo "=== Network Policies ==="
kubectl get networkpolicies

Usage: ./debug-k8s-network.sh backend

Quick Command Reference

Pod Logs

kubectl logs <pod>                    # Current logs
kubectl logs <pod> --previous         # Previous (crashed) container
kubectl logs <pod> -f                 # Stream in real time
kubectl logs <pod> --since=1h         # Time filter
kubectl logs -l app=backend           # Label selector

DNS Debugging

kubectl run test --rm -it --image=busybox -- nslookup <svc>
kubectl logs -n kube-system -l k8s-app=kube-dns

Service & Endpoints

kubectl get svc
kubectl get endpoints <svc>
kubectl describe svc <svc>

Network Testing

kubectl exec <pod> -- ping <ip>
kubectl exec <pod> -- curl http://<svc>:8080
kubectl exec <pod> -- netstat -tulpn

CNI Logs

kubectl logs -n kube-system -l k8s-app=calico-node
kubectl logs -n kube-system -l app=flannel

Conclusion

You have now learned how to read Kubernetes logs across every layer, 10 essential debugging tools, a real-world debugging workflow, and best practices with a quick command reference.

Key Takeaways: Logs tell you what happened. Tools help you test and verify. Use them together for the fastest debugging.

What’s Next?

Part 3: Kubernetes DNS Troubleshooting Guide (2026) (Coming Soon)


FAQs

Q: Which log should I check first to debug network issues quickly?
A: Always start with the pod logs (kubectl logs). 85% of network issues appear there.

Q: How do I know if the problem is DNS-related?
A: Look for “no such host” or “lookup” errors in the logs. Test with: kubectl run test --rm -it --image=busybox -- nslookup <service>

Q: What if my service has no endpoints?
A: Check if the pods exist with matching labels: kubectl get pods -l app=<label> --show-labels

Q: What is the best tool for beginners?
A: Start with k9s; it’s visual and intuitive. Then learn stern for log streaming.

Q: How do I capture network packets in Kubernetes?
A: Use ksniff: kubectl sniff <pod> -o capture.pcap

Keywords: kubernetes network debugging, kubectl logs, kubernetes troubleshooting, k8s networking, coredns debugging, cni logs, kubernetes tools, pod logs, service mesh, network policies, kubernetes devops

Share the Kubernetes network debugging challenges you have faced in the comments! Which network issues are you dealing with?
