Last Updated: January 2026
Introduction
Kubernetes network issues can cost companies thousands of dollars per minute in downtime. This article teaches you how to debug network problems using logs and professional tools, the same techniques used by platform engineers at Fortune 500 companies.
What you'll learn in this article:
- How to read the Kubernetes logs that matter for network troubleshooting
- 10 of the most powerful debugging tools
- Real-world debugging workflows you can follow
- Time-saving commands and automation steps
Series Navigation:
Part 1: 10 Common Kubernetes Network Errors
Part 2: Kubernetes Network Logs & Tools (You are here)
Part 3: Complete Guide to Kubernetes DNS Issues and Fixes (Coming soon)
Why Kubernetes Logs Matter for Network Debugging
The Real Impact of Kubernetes Network Issues
Meet Max, a DevOps engineer. It’s 2:45 PM on Thursday. His production app just crashed with timeout errors. Users across four continents cannot access the app services. Revenue bleeds at $400/minute.
He has barely 20 minutes to fix this before the escalation process kicks in.
The Challenge: Kubernetes generates logs from multiple layers:
- Application pods
- Services
- Ingress
- CNI plugins
- CoreDNS
- Node systems
Without a systematic approach, you’re searching blindly across all the layers of the cluster.
The Log Hierarchy
Application Layer → Pod logs
Service Layer → Endpoints and Events
Ingress Layer → Ingress controller logs
Network Layer → CNI plugin logs
DNS Layer → CoreDNS logs
Infrastructure → Node/kubelet logs
Golden Rule: Start at the application layer and work down. 85% of issues appear in the pod logs themselves.
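As a quick illustration of that top-down order, here is a minimal triage sequence. The app=backend label and backend-service name are placeholders for your own workload:

# 1. Application layer: recent pod logs
kubectl logs -l app=backend --tail=50

# 2. Service layer: does the Service have endpoints?
kubectl get endpoints backend-service

# 3. DNS layer: can a throwaway pod resolve the Service name?
kubectl run dns-check --rm -it --image=busybox -- nslookup backend-service

# 4. Network and infrastructure layers: CNI and node health
kubectl get pods -n kube-system
kubectl get nodes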
Understanding the Kubernetes Logs
1. Pod Logs (Application Layer)
Essential Commands
# View current pod logs
kubectl logs <pod-name>

# Crashed pod logs
kubectl logs <pod-name> --previous

# Real-time streaming
kubectl logs <pod-name> -f

# Multi-container pod logs
kubectl logs <pod-name> -c <container>

# Time-based filtering
kubectl logs <pod-name> --since=1h

# Last N lines
kubectl logs <pod-name> --tail=100

# All pods with a specific label
kubectl logs -l app=backend --all-containers
Decoding Network Errors
Example log output:
2026-01-07 14:22:15 ERROR: dial tcp: lookup backend-service: no such host
2026-01-07 14:22:20 ERROR: Connection timeout after 5s
Common error patterns:
| Error | Root Cause | What to Check |
|---|---|---|
| connection refused | Wrong port | Service port config |
| no such host | DNS failure | CoreDNS logs |
| i/o timeout | Network blocked | Network policies, firewall |
| connection reset | Pod crash | Pod status |
| dial tcp errors | No TCP connection | Service endpoints |
| TLS handshake timeout | Certificate issue | Secrets, certs |
Advanced Filtering for Pod Errors
# Find lines matching error patterns
kubectl logs <pod> | grep -i "error\|failed\|timeout"

# Network-specific filtering
kubectl logs <pod> | grep -i "connection\|dns\|dial"

# Count occurrences of a specific error
kubectl logs <pod> | grep "connection refused" | wc -l

# Export pod logs to a file for analysis
kubectl logs <pod> --since=24h > debug.log
2. Services & Endpoints Debugging
Check Service Events
kubectl describe service <service-name>
Look for events:
Warning SyncLoadBalancerFailed Error: quota exceeded
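To see only the events attached to one Service instead of scrolling through the full describe output, a field selector on events works. This is a sketch that assumes a Service named backend-service in the current namespace:

kubectl get events --field-selector involvedObject.kind=Service,involvedObject.name=backend-service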
Critical: Verify the Endpoints
kubectl get endpoints <service-name>
Healthy output:
NAME      ENDPOINTS                         AGE
backend   10.242.1.5:8080,10.244.2.8:8080   10m
Problem – No endpoints:
NAME      ENDPOINTS   AGE
backend   <none>      10m
If you see <none>:
# Check that the pods exist
kubectl get pods -l app=backend --show-labels

# Verify the selector matches the pod labels
kubectl get svc backend -o jsonpath='{.spec.selector}'
kubectl get pods -l app=backend -o jsonpath='{.items[*].metadata.labels}'

# Check pod readiness
kubectl get pods -l app=backend -o wide
3. CoreDNS Logs
Why CoreDNS Matters
CoreDNS resolves service names like backend-service.default.svc.cluster.local. When DNS fails, pods cannot find each other.
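Resolution inside a pod relies on the search domains that kubelet writes into /etc/resolv.conf, which is how a short name like backend-service expands to backend-service.default.svc.cluster.local. A quick way to inspect this from any running pod that has a shell:

# Show the DNS configuration injected into a pod
kubectl exec -it <pod-name> -- cat /etc/resolv.conf

# Typical output (values differ per cluster):
# nameserver 10.95.0.10
# search default.svc.cluster.local svc.cluster.local cluster.local
# options ndots:5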
Access CoreDNS Logs
kubectl get pods -n kube-system -l k8s-app=kube-dns

# View logs
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=100

# Real-time streaming
kubectl logs -n kube-system -l k8s-app=kube-dns -f
Understanding the DNS Logs
Successful query:
[INFO] 10.243.1.5 - "A IN backend.default.svc.cluster.local" NOERROR
Failed query:
[ERROR] plugin/errors: read udp: i/o timeout
Status Codes:
| Code | Meaning | Action |
|---|---|---|
| NOERROR | Success | None needed |
| NXDOMAIN | Name not found | Check the spelling |
| SERVFAIL | Server error | Check the CoreDNS config |
| timeout | Query timeout | Scale CoreDNS |
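If you need to see every query CoreDNS handles, for example to confirm exactly which name a pod is actually asking for, you can enable the CoreDNS log plugin in its ConfigMap. A sketch follows; note that query logging is verbose on busy clusters, and the Deployment is named coredns in most distributions:

kubectl -n kube-system edit configmap coredns

# In the Corefile, add the "log" plugin inside the main server block:
# .:53 {
#     log        # logs every query and its response code
#     errors
#     health
#     ...
# }

# If the "reload" plugin is enabled, CoreDNS picks up the change automatically;
# otherwise restart the pods:
kubectl -n kube-system rollout restart deployment coredns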
Test DNS Resolution
kubectl run dns-test --rm -it --image=busybox:1.28 -- \
  nslookup backend-service
Expected output:
Server:    10.95.0.10
Address 1: 10.95.0.10 kube-dns.kube-system.svc.cluster.local

Name:      backend-service
Address 1: 10.95.0.1 backend-service.default.svc.cluster.local
Fix Common CoreDNS Issues
Problem: DNS queries timing out
# Scale up CoreDNS
kubectl scale deployment coredns -n kube-system --replicas=4

# Increase CoreDNS resources
kubectl edit deployment coredns -n kube-system
4. CNI Plugin Logs – The Network Layer
What is CNI?
CNI (Container Network Interface) plugins provide the actual pod networking (to check which one your cluster runs, see the sketch after this list):
- Calico – Network policies & routing
- Flannel – Simple overlay network
- Cilium – eBPF-based
- Weave – Encrypted overlay
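If you are not sure which CNI is installed, the kube-system workloads usually give it away. This is a rough heuristic rather than a definitive check:

# Look for CNI components running in kube-system
kubectl get pods -n kube-system | grep -Ei 'calico|flannel|cilium|weave'

# Or inspect the CNI configuration on a node (requires node access)
ls /etc/cni/net.d/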
Access CNI Logs
# Calico
kubectl logs -n kube-system -l k8s-app=calico-node

# Flannel
kubectl logs -n kube-system -l app=flannel

# Cilium
kubectl logs -n kube-system -l k8s-app=cilium

# Weave
kubectl logs -n kube-system -l name=weave-net
Common CNI Errors
IP pool exhausted
Failed to create veth pair
network plugin not ready
Solution – Expand IP pool:
kubectl get ippool -o yaml

kubectl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-pool
spec:
  cidr: 10.245.0.0/16
  natOutgoing: true
EOF
5. Ingress Controller Logs – For External Traffic
Access NGINX Ingress Logs
kubectl logs -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx --tail=100

# Real-time
kubectl logs -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx -f
Log Format Explained
192.167.1.100 - [07/Jan/2026:14:45:23] "GET /api/users HTTP/1.1" 502
Key Fields:
- 192.167.1.100 – Client IP
- GET /api/users – Request
- 502 – Bad Gateway (backend unreachable)
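When 502s start appearing, it helps to know how many there are and which paths they hit. The one-liners below assume the log format shown in the sample line above; adjust the awk column if your controller logs a different format:

# Count recent 502 responses
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=5000 \
  | grep -c '" 502'

# Group 502s by request path ($5 is the path in the sample format above)
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=5000 \
  | grep '" 502' | awk '{print $5}' | sort | uniq -c | sort -rn | head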
Common Status Codes
| Code | Meaning | What to Check |
|---|---|---|
| 502 | Backend unreachable | Endpoints, pods |
| 504 | Gateway timeout | App performance |
| 404 | Path not found | Ingress rules |
| 503 | No healthy backends | Pod readiness |
6. Node Network Logs (Advanced debugging commands)
When to check: CNI errors on specific nodes, or inter-node connectivity issues.
# Get node status
kubectl get nodes
kubectl describe node <node-name>

# SSH to the node, then:
sudo journalctl -u kubelet -n 50
sudo journalctl -u containerd -n 50
sudo dmesg | grep -i "killed process"
Essential Kubernetes Tools for Debugging
Tool 1: kubectl
Advanced commands:
# Pod IPs
kubectl get pods -o wide
kubectl get pods -o jsonpath='{.items[*].status.podIP}'

# Test connectivity
kubectl exec -it <pod> -- curl http://<target>:8080

# Network policies
kubectl get networkpolicies --all-namespaces

# Debug with an ephemeral container (k8s 1.24+)
kubectl debug <pod> -it --image=nicolaka/netshoot
Tool 2: stern – Multi-Pod Log Streamer
Why use stern: to stream logs from multiple pods at once, with color-coded output.
Installation:
# macOS
brew install stern

# Linux
wget https://github.com/stern/stern/releases/latest/download/stern_linux_amd64.tar.gz
tar -xzf stern_linux_amd64.tar.gz
sudo mv stern /usr/local/bin/
Usage:
# Tail all backend pods
stern backend

# Specific namespace
stern -n production frontend

# Since 1 hour ago, filter errors
stern backend --since 1h | grep ERROR

# All namespaces
stern . --all-namespaces
Tool 3: k9s – Terminal UI
Why use k9s: a visual, terminal-based cluster management tool.
Installation:
# macOS
brew install k9s

# Linux
wget https://github.com/derailed/k9s/releases/latest/download/k9s_Linux_amd64.tar.gz
tar -xzf k9s_Linux_amd64.tar.gz
sudo mv k9s /usr/local/bin/
Launch: k9s
Tool 4: netshoot – The Network Swiss Army Knife
Includes: curl, dig, nslookup, tcpdump, netstat, traceroute, iperf, and 50+ more tools.
Usage:
# Create a debug pod
kubectl run netshoot --rm -it --image=nicolaka/netshoot -- bash

# Inside netshoot:
nslookup backend-service                      # Test DNS
dig backend.default.svc.cluster.local         # Detailed DNS
curl http://backend:8080/health               # HTTP test
ping backend-service                          # Connectivity
traceroute backend-service                    # Route tracing
tcpdump -i eth0 port 8080                     # Packet capture
netstat -tulpn                                # Show connections
Debug existing pod:
kubectl debug <pod> -it --image=nicolaka/netshoot
Deploy a debug pod:
apiVersion: v1
kind: Pod
metadata:
  name: netshoot
spec:
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: ["sleep", "infinity"]
Tool 5: kubetail – Multi-Pod Logs
Installation:
wget https://raw.githubusercontent.com/johanhaleby/kubetail/master/kubetail
chmod +x kubetail
sudo mv kubetail /usr/local/bin/
Usage:
kubetail -l app=backend
kubetail -n production -l app=frontend
Tool 6: ksniff – Packet Capturing tool
Installation:
kubectl krew install sniff
Usage:
# Capture packets
kubectl sniff <pod-name>

# Save to a file
kubectl sniff <pod-name> -o capture.pcap

# Filter by port
kubectl sniff <pod-name> -f "port 8080"

# Open in Wireshark
kubectl sniff <pod> -o - | wireshark -k -i -
Tool 7: Popeye – Cluster Sanitizer
What it does: scans your cluster for misconfigurations.
Installation:
# macOS
brew install derailed/popeye/popeye

# Linux
wget https://github.com/derailed/popeye/releases/latest/download/popeye_Linux_x86_64.tar.gz
Usage:
popeye                                    # Scan everything
popeye -n production                      # Specific namespace
popeye --save --output-file reports.html
Tool 8: kubectx & kubens – Context Switching
Installation:
brew install kubectx
Usage:
kubectx                  # List contexts
kubectx staging          # Switch context
kubens production        # Switch namespace
Tool 9: Goldpinger – Connectivity Visualization
Installation:
kubectl apply -f https://raw.githubusercontent.com/bloomberg/goldpinger/master/extras/example-goldpingers.yaml
Access:
kubectl port-forward svc/goldpinger 8080:8080
# Visit http://localhost:8080
Shows a visual map of node-to-node connectivity.
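Beyond the UI, Goldpinger also serves its probe results over HTTP, which is handy for scripts or monitoring jobs. A sketch, assuming the port-forward above is still running and the /check_all endpoint of the Goldpinger API:

# Fetch the full node-to-node connectivity matrix as JSON (jq is optional, for readability)
curl -s http://localhost:8080/check_all | jq .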
Tool 10: Cilium Hubble – The Flow Observability tool
Requires: Cilium CNI
Installation:
helm upgrade cilium cilium/cilium \
  --set hubble.enabled=true \
  --set hubble.ui.enabled=true
Usage:
hubble observe                         # Watch flows
hubble observe --pod backend           # Filter by pod
hubble observe --verdict DROPPED       # Show dropped packets
hubble ui                              # Launch the UI
Practical Debugging Workflow
Real-World Example: Frontend Cannot Reach the Backend
Problem: Users start reporting timeout errors.
Step 1: Check Application Logs
kubectl logs -l app=frontend --tail=40
Output: Error: Connection timeout to backend-service
Step 2: Test the DNS
kubectl run test --rm -it --image=busybox -- nslookup backend-service
Result: DNS resolves correctly
Step 3: Check the Service & Endpoints
kubectl get svc backend-service
kubectl get endpoints backend-service
Result: Endpoints show <none>
Step 4: Find the Missing Pods
kubectl get pods -l app=backend
Result: No pods found.
Step 5: Check the Deployment
kubectl describe deployment backend
Output: FailedCreate: exceeded quota
Solution
kubectl edit resourcequota -n production
# Increase the limits
Result: Pods created -> Endpoints appear -> Frontend connects
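After raising the quota, it is worth confirming that each layer actually recovered rather than assuming it. A minimal verification pass using the same names as the workflow above (the frontend Deployment name and the /health path are assumptions):

# Pods back?
kubectl get pods -l app=backend

# Endpoints populated again?
kubectl get endpoints backend-service

# End-to-end check from a frontend pod (deploy/frontend and /health are assumed names)
kubectl exec -it deploy/frontend -- curl -s -o /dev/null -w "%{http_code}\n" http://backend-service:8080/health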
Best Practices & Quick Reference
Create a Debug Script
#!/bin/bash
# debug-k8s-network.sh
APP=$1

echo "=== Pod Status ==="
kubectl get pods -l app=$APP -o wide

echo "=== Services ==="
kubectl get svc -l app=$APP

echo "=== Endpoints ==="
kubectl get endpoints -l app=$APP

echo "=== Recent Logs ==="
kubectl logs -l app=$APP --tail=30

echo "=== DNS Test ==="
kubectl run dns-test-$$ --rm -it --image=busybox -- nslookup kubernetes.default

echo "=== Network Policies ==="
kubectl get networkpolicies
Usage: ./debug-k8s-network.sh backend
Quick Command References
Pod Logs
kubectl logs <pod>                 # Current logs
kubectl logs <pod> --previous      # Crashed container
kubectl logs <pod> -f              # Real-time stream
kubectl logs <pod> --since=1h      # Time filter
kubectl logs -l app=backend        # Label selector
DNS Debugging
kubectl run test --rm -it --image=busybox -- nslookup <svc>
kubectl logs -n kube-system -l k8s-app=kube-dns
Service & Endpoints
kubectl get svc
kubectl get endpoints <svc>
kubectl describe svc <svc-name>
Network Testing
kubectl exec <pod> -- ping <ip>
kubectl exec <pod> -- curl http://<svc>:8080
kubectl exec <pod> -- netstat -tulpn
CNI Logs
kubectl logs -n kube-system -l k8s-app=calico-node
kubectl logs -n kube-system -l app=flannel
Conclusion
So far, you have learned how to read Kubernetes logs, the 10 essential debugging tools, real-world debugging workflows, and best practices with a quick command reference.
Key takeaways: logs tell you what happened, and tools help you test and verify. Use them together for the fastest debugging.
What’s Next?
Part 3: Kubernetes DNS Troubleshooting Guide (2026) (Coming Soon)
FAQs
Q: Which log should I check first for debugging network issues fast?
A: Always start with the pod logs (kubectl logs). 85% of network issues appear there.
Q: How do I know if the problem is DNS-related?
A: Look for “no such host” or “lookup” errors in the logs. Test with: kubectl run test --rm -it --image=busybox -- nslookup <service>
Q: What if my service has no endpoints?
A: Check if the pods exist with matching labels: kubectl get pods -l app=<label> --show-labels
Q: Best tool for beginners?
A: Start with k9s; it's visual and intuitive. Then learn stern for log streaming.
Q: How do I capture network packets in Kubernetes?
A: Use ksniff: kubectl sniff <pod> -o capture.pcap
Keywords: kubernetes network debugging, kubectl logs, kubernetes troubleshooting, k8s networking, coredns debugging, cni logs, kubernetes tools, pod logs, service mesh, network policies, kubernetes devops
Share the Kubernetes networking challenges you have faced in the comments! What network issues are you dealing with?