Last Updated: February 2026
Landing your first DevOps role can feel like standing at the base of Mount Everest with nothing but a hiking stick. I’ve been there, sweating through interviews while desperately trying to remember the difference between chmod and chown. After helping dozens of junior engineers break into DevOps over the past few years, I’ve noticed the same Linux questions keep popping up, interview after interview.
Here’s the thing: DevOps isn’t just about knowing commands. It’s about understanding how systems work together. But let’s be real—you need to nail those fundamental Linux questions first before you can impress anyone with your orchestration philosophies.
This guide, Linux Interview Questions for DevOps Beginners, covers the actual questions I’ve seen in 2026 interviews at companies ranging from scrappy startups to FAANG giants. No fluff, no outdated nonsense from 2015. Just practical stuff that interviewers are asking right now.
Why Linux Matters in DevOps (And Why You Should Care)
Before we dive into questions, let me share a quick story. Last month, a friend called me in a panic. His production server was down, users were screaming, and he was staring at a terminal like it was written in ancient hieroglyphics. The fix? A simple disk space issue that took 30 seconds once he knew which commands to run.
That’s the reality of DevOps. Linux isn’t just something you learn for interviews—it’s the foundation of literally everything you’ll touch. Docker containers? Running on Linux. Kubernetes clusters? Linux. CI/CD pipelines? You guessed it.
Most modern infrastructure runs on Linux because it’s stable, secure, and doesn’t cost a fortune in licensing. As a DevOps beginner, your Linux skills are basically your resume. Everything else you learn builds on top of this foundation.
The Interview Reality Check
Here’s what actually happens in DevOps interviews these days. Forget the “where do you see yourself in five years” nonsense. Interviewers want to know if you can:
- Debug a server issue at 2 AM without googling every command
- Write scripts that don’t accidentally delete production databases
- Understand system performance well enough to spot problems before users complain
- Explain technical concepts without sounding like a robot reading man pages
Most interviews follow a similar pattern: basic commands first, then file permissions, then networking, then troubleshooting scenarios. The good news? If you know the fundamentals cold, the rest becomes way easier.
Essential Linux Interview Questions For DevOps (The Ones That Actually Matter)
Basic Commands and File System Navigation
Question: Explain the Linux directory structure. What’s the purpose of /etc, /var, /home, and /tmp?
This question tests whether you understand Linux organization beyond just typing cd commands. Here’s the breakdown that actually makes sense:
The /etc directory is where all your configuration files live. Think of it as the control center. When you need to change how Apache behaves or modify your network settings, you’re heading to /etc. I spent my first week as a junior admin editing files in here (and yes, I broke things. Multiple times).
The /var directory is for variable data—stuff that changes while the system runs. Logs pile up in /var/log, web content often lives in /var/www, and mail queues hang out in /var/mail. When a disk fills up on a production server, nine times out of ten, /var is the culprit because some log file decided to grow to 50GB.
Your /home directory is personal space. Each user gets their own folder here to store their files, configurations, and that bash_history file that might embarrass you later. As a DevOps person, you’ll create service accounts that don’t use /home, but regular users need this.
Finally, /tmp is temporary storage that gets cleaned out when the system reboots. Perfect for scripts that need to create temporary files without cluttering up the system. Just remember: anything in /tmp can disappear, so never store anything important there (learned that lesson the hard way).
Question: What’s the difference between absolute and relative paths?
Absolute paths start from the root directory, like /home/ubuntu/scripts/deploy.sh. They work no matter where you are in the file system. Relative paths depend on your current location. If you’re in /home/ubuntu, then scripts/deploy.sh points to the same file.
Why does this matter? Scripts. If you hardcode absolute paths in your automation scripts, they’ll break when you move things around. But pure relative paths can fail when the script gets called from different directories. Most experienced DevOps folks use a mix—setting a base path variable at the top of scripts, then using relative paths from there.
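That base-path pattern can be sketched like this (the file names are illustrative, not from any particular project):

```shell
#!/bin/bash
# Resolve the directory this script lives in, no matter where it's called from.
# $0 is the path used to invoke the script; dirname strips the filename,
# and the cd/pwd combo turns it into an absolute path.
BASE_DIR="$(cd "$(dirname "$0")" && pwd)"

# Build paths from the base instead of hardcoding absolute locations.
CONFIG_FILE="$BASE_DIR/config/app.conf"
LOG_FILE="$BASE_DIR/logs/deploy.log"
echo "Base directory: $BASE_DIR"
```

Now the script works whether it's called from cron, from another directory, or after the whole tree is moved.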
Question: How do you find files in Linux?
The find command is your Swiss Army knife. Basic syntax looks like this:
find /var/log -name "*.log" -mtime -7
This searches /var/log for files ending in .log that were modified in the last 7 days. But find gets way more powerful. You can search by size, permissions, file type, and even execute commands on the results.
Real example from my last job: We needed to clean up old backup files eating disk space. Instead of manually hunting through directories, I used:
find /backups -name "*.tar.gz" -mtime +30 -delete
This found all gzip archives older than 30 days and deleted them. Saved hours of manual work. Just be careful with that -delete flag—test your find command first by removing it and checking the results.
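One safe way to rehearse that cleanup, shown here against a throwaway directory so nothing real is touched, is to swap -delete for -print first:

```shell
# Set up a throwaway directory with one "old" and one "new" archive.
demo=$(mktemp -d)
touch -d "40 days ago" "$demo/old-backup.tar.gz"
touch "$demo/new-backup.tar.gz"

# Dry run: -print shows what WOULD match, without deleting anything.
find "$demo" -name "*.tar.gz" -mtime +30 -print

# Same expression with -delete actually removes the matches.
find "$demo" -name "*.tar.gz" -mtime +30 -delete
```

Only the 40-day-old file matches -mtime +30; the fresh one survives. Same habit applies to the /backups command above: run it with -print, eyeball the list, then add -delete.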
The locate command is faster for simple filename searches because it uses a database, but you need to run updatedb first to refresh that database. In interviews, show you know both tools and when to use each.
File Permissions and Ownership
Question: Explain Linux file permissions. What does chmod 755 mean?
File permissions trip up a lot of beginners, but they’re actually logical once you get the pattern. Every file has three permission sets: owner, group, and others. Each set has three permissions: read, write, and execute.
When you see something like this:
-rwxr-xr-x
That first dash is the file type (- for regular file, d for directory). Then you get three groups of three characters. The first rwx means the owner can read, write, and execute. The second r-x means the group can read and execute but not write. The third r-x is the same for everyone else.
Now chmod 755 is the numeric shorthand. Each permission has a value: read is 4, write is 2, execute is 1. You add them up for each group:
- 7 = 4+2+1 (read, write, execute) for the owner
- 5 = 4+1 (read, execute) for the group
- 5 = 4+1 (read, execute) for others
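You can verify the arithmetic yourself with a scratch file (mktemp so nothing real is touched) — the numeric and symbolic forms are two spellings of the same mode:

```shell
# Create a scratch file and apply 755 numerically.
f=$(mktemp)
chmod 755 "$f"
mode_numeric=$(stat -c '%a' "$f")   # 755

# The symbolic form expresses the same permissions.
chmod u=rwx,g=rx,o=rx "$f"
mode_symbolic=$(stat -c '%a' "$f")  # still 755

echo "numeric: $mode_numeric, symbolic: $mode_symbolic"
rm -f "$f"
```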
This comes up constantly in DevOps. Script not running? Check if it’s executable. Application can’t write logs? Check the permissions on the log directory. Security audit failed? Someone probably gave 777 permissions to everything (please don’t do this).
Question: What’s the difference between chmod and chown?
chmod changes permissions—what people can do with a file. chown changes ownership—who the file belongs to. You’ll use both regularly.
Common scenario: You deploy an application that needs to write to a specific directory. The app runs as the www-data user, but the directory is owned by root. You need:
sudo chown -R www-data:www-data /var/www/app
sudo chmod -R 755 /var/www/app
The -R flag means recursive—apply this to all files and subdirectories. Without it, you’d only change the parent directory, and your app would still fail.
Question: What are sticky bits and SUID/SGID?
Okay, this is where permissions get interesting. These special bits give files extra capabilities beyond the normal read, write, execute.
SUID (Set User ID) on an executable means it runs with the permissions of the file owner, not the person executing it. The passwd command uses this—regular users can run it to change their password, but it needs root privileges to actually modify /etc/shadow.
SGID (Set Group ID) is similar but for groups. When set on a directory, new files created inside inherit the directory’s group instead of the creator’s default group. Super useful for shared project directories.
The sticky bit on a directory (like /tmp) means only the file owner can delete or rename their files, even if others have write permission to the directory. Prevents users from messing with each other’s temporary files.
You set these with chmod too:
chmod u+s file # Add SUID
chmod g+s directory # Add SGID
chmod +t directory # Add sticky bit
Or numerically, they’re an extra digit in front of the usual three: 4 for SUID, 2 for SGID, 1 for the sticky bit. So chmod 4755 sets SUID on top of regular 755 permissions.
Process Management
Question: How do you check running processes in Linux?
The ps command is your starting point, but most people jump straight to ps aux because it shows all processes with useful details:
ps aux | grep nginx
This shows all nginx processes, who’s running them, CPU and memory usage, and how long they’ve been running. The pipes and grep combo is something you’ll use about a thousand times a day.
For a live, updating view, top and htop are better. I prefer htop because it’s more readable and interactive, but top is guaranteed to be on every system. In an interview, mention both and explain that htop needs to be installed separately.
Want to find a specific process ID? pgrep is cleaner than ps piped to grep:
pgrep nginx
Need to kill a stubborn process? pkill sends signals by process name instead of making you look up the PID first. But in production, always try graceful shutdowns before forcing:
pkill -15 nginx # SIGTERM - polite request to stop
pkill -9 nginx # SIGKILL - nuclear option, last resort
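A quick way to see what a signal does to a process: a child killed by a signal exits with status 128 plus the signal number, which you can observe with wait (sleep stands in for a real service here):

```shell
# Start a long-running process in the background.
sleep 300 &
pid=$!

# Politely ask it to stop (SIGTERM, signal 15).
kill -15 "$pid"

# wait reports 128 + signal number for a signal-killed child.
status=0
wait "$pid" || status=$?
echo "exit status: $status"   # 143 = 128 + 15 (SIGTERM)
```

The same arithmetic is why monitoring tools report 137 for OOM-killed containers: 128 + 9 (SIGKILL).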
Question: Explain the difference between a process and a thread.
A process is an independent program running in memory with its own resources. A thread is a lighter-weight execution unit within a process that shares resources with other threads in the same process.
Think of a process like a house. Each house (process) has its own address, utilities, and stuff. Threads are like people living in that house—they share the kitchen, bathroom, and living room, but can work on different tasks independently.
Why does this matter in DevOps? Resource consumption. Threads are cheaper than processes. A web server handling 1000 connections with threads uses way less memory than one spawning 1000 separate processes. But threads share memory space, so a bug in one thread can crash the entire process.
Question: How do you run a process in the background?
Add an ampersand at the end:
./long_running_script.sh &
But here’s the catch—if you log out, that process dies. For persistent background jobs, you need nohup:
nohup ./long_running_script.sh > output.log 2>&1 &
This runs the script, redirects both standard output and errors to output.log, and keeps running even after you disconnect. The 2>&1 part is crucial—it redirects stderr (file descriptor 2) to stdout (file descriptor 1), so you catch all the output in one log file.
In modern DevOps, you’d probably use systemd for long-running services, but knowing these basics shows you understand process management fundamentals.
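The order of those redirections matters more than it looks. This small experiment (throwaway files via mktemp) shows why 2>&1 has to come after the file redirect:

```shell
workdir=$(mktemp -d)

# Right: point stdout at the file FIRST, then duplicate stderr onto it.
ls "$workdir/missing" > "$workdir/right.log" 2>&1 || true

# Wrong order: 2>&1 duplicates stderr onto the CURRENT stdout (the
# terminal) before stdout is moved, so the error never reaches the file.
ls "$workdir/missing" 2>&1 > "$workdir/wrong.log" || true

right=$(grep -c "missing" "$workdir/right.log" || true)
wrong=$(grep -c "missing" "$workdir/wrong.log" || true)
echo "right.log matches: $right, wrong.log matches: $wrong"   # 1 and 0
```

Redirections are processed left to right, and 2>&1 copies wherever stdout points at that moment, not where it will point later.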
Text Processing and Pipelines
Question: What’s the difference between grep, sed, and awk?
These are the holy trinity of text processing, and interviewers love asking about them because they reveal how well you understand Unix philosophy.
grep searches for patterns. That’s it. Simple, focused, does one thing well:
grep "ERROR" application.log
grep -r "TODO" /home/ubuntu/project/
sed is a stream editor for transforming text. Most commonly used for find-and-replace:
sed 's/old_text/new_text/g' file.txt
sed -i 's/localhost/production.server.com/g' config.yml
That -i flag edits the file in place, which is convenient but dangerous. Always test without it first.
awk is a full programming language for text processing. It’s overkill for simple tasks but unbeatable for complex column-based operations:
awk '{print $1, $3}' access.log # Print first and third columns
awk '$3 > 100 {print $0}' data.csv # Print rows where column 3 exceeds 100
In practice? I use grep for 80% of my text searching, sed for quick replacements, and awk when I need to manipulate structured data like CSV files or log columns.
Question: Explain pipes and redirections.
Pipes (|) connect commands by feeding one command’s output as the next command’s input:
cat large_file.log | grep "ERROR" | wc -l
This chains three commands: cat outputs the file, grep filters for errors, wc counts the lines. The result tells you how many error lines exist.
Redirections change where output goes. Greater-than sends stdout to a file:
ls -la > directory_contents.txt
Double greater-than appends instead of overwriting:
echo "New log entry" >> application.log
Less-than reads input from a file:
mysql database < schema.sql
The 2> redirect captures errors separately, which is super useful for debugging:
./deploy.sh > success.log 2> errors.log
Question: How would you find the top 10 most frequent IP addresses in an access log?
This question combines multiple skills and shows practical problem-solving. Here’s how I’d answer it:
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -10
Breaking this down:
- awk prints the first column (IP addresses)
- sort arranges them alphabetically
- uniq -c counts occurrences (only works on sorted input)
- sort -rn sorts numerically in reverse (highest counts first)
- head -10 shows top 10 results
In an interview, walk through your thought process like this. It shows you understand both the commands and how to combine them for real-world tasks.
Networking Basics
Question: How do you check if a port is listening?
The go-to command is netstat, though it’s being replaced by ss on newer systems:
netstat -tuln | grep :80
ss -tuln | grep :80
These show TCP and UDP listening ports (-t and -u), with numeric addresses (-n) for faster output. The -l flag filters for listening ports only.
Want to check if a remote port is open? Use telnet or nc (netcat):
telnet example.com 443
nc -zv example.com 443
The nc version is better because -z just scans without trying to send data, and -v gives you verbose output. If the port’s open, you get a success message. If not, you get a connection refused.
Question: Explain the difference between TCP and UDP.
TCP is like a phone call—reliable, ordered, connection-oriented. You establish a connection, exchange data, and close it properly. If packets get lost, TCP resends them. Order is guaranteed.
UDP is like shouting across a crowded room—fast, lightweight, connectionless. You blast out your message and hope it arrives. No handshakes, no guarantees, no resending lost packets.
When does this matter? TCP for anything that needs reliability: web traffic, file transfers, SSH connections. UDP for things that prioritize speed over perfection: video streaming, DNS lookups, gaming.
I once debugged a weird production issue where video quality was terrible. Turned out someone had tried to tunnel UDP traffic through a TCP connection, adding latency and causing stuttering. Understanding these protocols saved hours of troubleshooting.
Question: How do you trace the network path to a remote server?
traceroute (or tracepath) shows each hop between you and the destination:
traceroute google.com
Each line is a router along the path. You’ll see response times for three probes per hop. Asterisks mean that router didn’t respond (firewall rules, usually).
For basic connectivity testing, ping still works great:
ping -c 4 example.com
The -c flag limits it to 4 packets instead of running forever. Perfect for quick “is this server alive” checks.
More advanced? mtr combines ping and traceroute into a real-time display:
mtr google.com
This continuously tests the path and updates statistics. Much better for diagnosing intermittent network issues.
Shell Scripting Fundamentals
Question: Write a script to back up a directory and delete backups older than 30 days.
Here’s a practical solution I’d give in an interview:
#!/bin/bash
BACKUP_DIR="/backup"
SOURCE_DIR="/var/www/html"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/backup_$DATE.tar.gz"

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Create compressed backup
tar -czf "$BACKUP_FILE" "$SOURCE_DIR"

# Check if backup succeeded
if [ $? -eq 0 ]; then
    echo "Backup created successfully: $BACKUP_FILE"
    # Delete backups older than 30 days
    find "$BACKUP_DIR" -name "backup_*.tar.gz" -mtime +30 -delete
    echo "Old backups cleaned up"
else
    echo "Backup failed!" >&2
    exit 1
fi
Walking through this shows you understand:
- Variables and command substitution
- File operations and error checking
- The special $? variable (exit status of last command)
- Output redirection for errors
- Proper exit codes
Question: Explain what set -e and set -x do in shell scripts.
These are crucial for professional scripts. set -e makes the script exit immediately if any command fails. Without it, errors can cascade into disasters:
#!/bin/bash
set -e
cd /production/app
git pull origin main
npm install
npm run build
systemctl restart app
With set -e, if any step fails, the script stops instead of blindly continuing and potentially deploying broken code.
set -x turns on debug mode—it prints each command before executing it. Super helpful for troubleshooting:
#!/bin/bash
set -x
echo "Starting deployment"
./deploy.sh
You’ll see every command with expanded variables, making it obvious where things go wrong.
Pro tip: Combine them with set -euo pipefail for maximum safety. The -u flag fails on undefined variables, and pipefail makes pipes fail if any command in the chain fails (not just the last one).
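A minimal demonstration of what pipefail changes (the grep target is a deliberately nonexistent file, so the first pipeline stage fails):

```shell
# Without pipefail, a pipeline's status is the LAST command's status,
# so an early failure is silently swallowed by the succeeding wc.
bash -c 'grep "needle" /no/such/file 2>/dev/null | wc -l' >/dev/null
status_plain=$?

# With pipefail, any failing stage fails the whole pipeline.
status_pipefail=0
bash -c 'set -o pipefail; grep "needle" /no/such/file 2>/dev/null | wc -l' >/dev/null \
  || status_pipefail=$?

echo "plain=$status_plain pipefail=$status_pipefail"   # plain=0, pipefail nonzero
```

That swallowed failure is exactly how a broken `curl ... | tee deploy.log` sails through a script that checks only the final exit code.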
Question: How do you handle command-line arguments in a bash script?
Positional parameters are your friend. $1 is the first argument, $2 is the second, and so on. $0 is the script name, and $# is the argument count:
#!/bin/bash
if [ $# -lt 2 ]; then
    echo "Usage: $0 <source> <destination>"
    exit 1
fi

SOURCE=$1
DEST=$2
echo "Copying from $SOURCE to $DEST"
cp -r "$SOURCE" "$DEST"
For more complex scripts with named options, getopts is the standard approach:
#!/bin/bash
while getopts "f:d:v" opt; do
    case $opt in
        f) FILE=$OPTARG ;;
        d) DIR=$OPTARG ;;
        v) VERBOSE=true ;;
        *) echo "Invalid option"; exit 1 ;;
    esac
done
echo "Processing file: $FILE in directory: $DIR"
This lets users run your script with flags like -f filename -d /path -v instead of relying on position alone.
System Performance and Troubleshooting
Question: How do you check disk usage and find what’s consuming space?
The df command shows filesystem-level usage:
df -h # Human-readable sizes
But df just tells you the disk is full, not what’s filling it. For that, use du:
du -sh /* # Size of each directory in root
du -ah /var | sort -rh | head -20 # Top 20 largest files/dirs in /var
Real-world scenario: Production server runs out of space at 3 AM. You SSH in, run df -h, see /var is at 100%. Then du -sh /var/* shows /var/log is 80GB. Then du -sh /var/log/* reveals application.log is 75GB. Crisis solved.
Modern alternative: ncdu is an interactive disk usage tool that’s way easier to navigate. But in interviews, stick with du and df since they’re universally available.
Question: What’s the difference between load average and CPU usage?
This confuses people constantly. CPU usage is straightforward—percentage of time the CPU is busy. Load average is trickier.
Load average represents the number of processes waiting for CPU time, averaged over 1, 5, and 15 minutes. You see three numbers like “1.5, 2.0, 1.8” from the uptime command.
On a single-core machine, a load average of 1.0 means the CPU is fully utilized. Above 1.0 means processes are waiting. On a four-core machine, 4.0 is full utilization.
Here’s the key insight: High load with low CPU usage usually means I/O bottleneck. Processes are waiting for disk or network, not CPU cycles. High CPU usage with normal load means you’re computationally bound. Different problems, different solutions.
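On Linux you can read the same numbers straight from /proc/loadavg and compare them to the core count — a rough sketch of that mental check:

```shell
# /proc/loadavg starts with the 1, 5, and 15 minute load averages.
read one five fifteen rest < /proc/loadavg
cores=$(nproc)

echo "load: $one (1m) $five (5m) $fifteen (15m) across $cores cores"

# Rule of thumb: sustained 1-minute load above the core count means
# processes are queuing for CPU (or stuck waiting on I/O).
awk -v load="$one" -v cores="$cores" \
    'BEGIN { print (load > cores ? "saturated" : "headroom") }'
```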
Question: How would you diagnose a slow server?
This open-ended question reveals your troubleshooting methodology. Here’s the approach I recommend:
First, check the basics with top or htop. Look at load average, CPU usage, memory consumption. Is something obvious consuming all resources?
Second, check I/O with iostat:
iostat -x 1
High %iowait means disk bottleneck. The -x flag gives extended stats, and 1 means update every second.
Third, check network with iftop or nethogs to see what’s sending/receiving data.
Fourth, check logs. Always check logs:
tail -f /var/log/syslog
journalctl -f
The -f flag follows the log in real-time, so you see issues as they happen.
Finally, if nothing’s obvious, you might need strace to see system calls:
strace -p <process_id>
This shows exactly what a process is doing at the system level. Tons of output, but invaluable for debugging weird issues.
Walk through this logically in interviews. Show you have a systematic approach, not just “try stuff until something works.”
Package Management
Question: Explain the difference between apt and yum.
These are package managers for different Linux distributions. apt is for Debian-based systems like Ubuntu. yum (now dnf) is for Red Hat-based systems like CentOS and Fedora.
They do the same job—install, update, and remove software—but with different commands and package formats. Debian uses .deb packages, Red Hat uses .rpm.
Common apt commands:
apt update # Refresh package lists
apt upgrade # Update installed packages
apt install nginx # Install new package
apt remove nginx # Uninstall package
apt search keyword # Find packages
Equivalent yum/dnf commands:
yum check-update
yum update
yum install nginx
yum remove nginx
yum search keyword
In 2026, most Ubuntu systems use apt, and most enterprise environments run RHEL variants with dnf. Know both, mention that dnf is the modern replacement for yum.
Question: How do you install a package from source?
Sometimes packages aren’t in repositories, or you need a specific version. The classic approach:
wget https://example.com/software-1.2.tar.gz
tar -xzf software-1.2.tar.gz
cd software-1.2
./configure
make
sudo make install
The configure script checks dependencies and sets compilation options. make compiles the source code. make install copies binaries to system directories.
But here’s the problem: Software installed this way bypasses the package manager. No automatic updates, harder to uninstall, dependency tracking is manual.
Better approach in 2026? Use containers or create custom packages. If you must compile from source, at least use checkinstall to create a package:
./configure
make
sudo checkinstall
This generates a .deb or .rpm that the package manager can track.
Log Management
Question: Where are system logs typically stored in Linux?
Most logs live in /var/log. Specific locations vary by distribution and service, but common ones include:
- /var/log/syslog or /var/log/messages – General system logs
- /var/log/auth.log – Authentication attempts
- /var/log/nginx/ – Web server logs
- /var/log/apache2/ – Alternative web server logs
- /var/log/mysql/ – Database logs
Modern systems use systemd, which stores logs in a binary format accessed through journalctl:
journalctl -u nginx.service # Logs for nginx service
journalctl -f # Follow logs in real-time
journalctl --since "1 hour ago" # Recent logs
journalctl -p err # Only error-level and above
The advantage of journalctl is centralized logging with powerful filtering. The disadvantage is the binary format—you can’t just grep through files.
Question: How do you monitor logs in real-time?
The tail command with -f flag (follow) is essential:
tail -f /var/log/nginx/access.log
For multiple files simultaneously, use tail with multiple arguments:
tail -f /var/log/nginx/*.log
Want to filter while following? Pipe through grep:
tail -f /var/log/application.log | grep ERROR
For systemd logs, journalctl -f works the same way.
Advanced option: multitail lets you watch multiple logs in split-screen. But again, for interviews, stick with basics that exist everywhere.
Scenario-Based Questions (Where They Test Your Thinking)
Scenario 1: Disk Space Emergency
Question: A production server is out of disk space. Walk me through your response.
This tests your troubleshooting process under pressure. Here’s the systematic approach:
First, verify the issue:
df -h
Identify which filesystem is full. Then find what’s consuming space:
du -sh /* | sort -rh
Common culprits: log files, temp files, deleted but open files. For the last one, check:
lsof | grep deleted
If a process has a file open that’s been deleted, the space won’t be freed until that process closes the file or restarts.
Quick wins to free space:
- Clear old logs: find /var/log -name "*.log" -mtime +30 -exec gzip {} \;
- Clean package cache: apt clean or yum clean all
- Remove old kernels: apt autoremove (carefully!)
- Empty trash: rm -rf /root/.local/share/Trash/*
Then implement prevention:
- Set up log rotation with logrotate
- Add disk space monitoring with alerts
- Consider mounting /var or /var/log on separate partition
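For the logrotate step, a minimal policy might look like this — the path and numbers are placeholders for your application; drop a file like it into /etc/logrotate.d/ and adjust:

```
# /etc/logrotate.d/myapp (illustrative)
/var/log/myapp/*.log {
    daily
    # keep two weeks of history
    rotate 14
    compress
    # leave yesterday's log uncompressed for easy tailing
    delaycompress
    missingok
    notifempty
    # truncate in place; alternatively use a postrotate script
    # to signal the app to reopen its log file
    copytruncate
}
```

Test it without waiting a day: logrotate -d /etc/logrotate.conf does a dry run and prints what it would do.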
Walk through this methodically. Show you won’t panic and start randomly deleting files. Explain you’d document what you did and why.
Scenario 2: Compromised Server
Question: You suspect a server has been compromised. What do you check?
Security scenario questions test your awareness of threats and forensics basics.
First, preserve evidence if this is serious:
# Save current state
ps aux > /tmp/running_processes.txt
netstat -tuln > /tmp/open_ports.txt
Check for suspicious network connections:
netstat -tun | grep ESTABLISHED
lsof -i # See which processes have network connections
Look for unusual processes:
ps aux | less # Review everything running
top # Sort by CPU or memory to spot anomalies
Check login history:
last # Recent logins
lastb # Failed login attempts
who # Currently logged in users
Review system logs:
grep -i "fail\|error\|unauthorized" /var/log/auth.log
journalctl -p err -n 100
Check for backdoors:
ls -la /tmp /var/tmp # Common hiding spots
find / -perm -4000 -type f 2>/dev/null # List SUID files; investigate any you don't recognize
Examine cron jobs:
crontab -l # Current user
ls -la /etc/cron* # System cron jobs
If confirmed compromised: Isolate the server (firewall rules), preserve evidence for investigation, rebuild from clean backups, audit all credentials, patch the vulnerability that allowed compromise.
In interviews, emphasize you’d follow company incident response procedures rather than going rogue.
Scenario 3: Performance Degradation
Question: Users report the application is slow. How do you investigate?
This combines multiple skills into realistic troubleshooting.
Start with application-specific metrics if available. If not, work from the system up:
Check system load:
uptime
top
Is load high? Are specific processes consuming resources?
Check memory:
free -h
Is the system swapping heavily? That would explain slowness.
Check I/O:
iostat -x 1 10
High %iowait indicates disk bottleneck.
Check network:
iftop
netstat -s # Network statistics and errors
Check application logs for errors or slow queries:
tail -f /var/log/application.log
For web applications, check web server logs:
tail -f /var/log/nginx/access.log
Look for patterns: Specific URLs slow? Timeouts? Database connection errors?
The key in interviews is showing logical progression: System health → Resource bottlenecks → Application-specific investigation → Log analysis → Hypothesis → Solution.
Advanced Topics (To Stand Out)
Systemd Services
Understanding systemd sets you apart from candidates who only know traditional init scripts.
Question: How do you create a systemd service?
Create a unit file in /etc/systemd/system/:
sudo nano /etc/systemd/system/myapp.service
Basic service configuration:
[Unit]
Description=My Application
After=network.target
[Service]
Type=simple
User=www-data
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/python3 /opt/myapp/app.py
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
Enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable myapp.service
sudo systemctl start myapp.service
Check status:
systemctl status myapp.service
journalctl -u myapp.service -f
This shows you understand service management beyond just “restart nginx.”
SSH and Security
Question: Explain SSH key authentication and why it’s better than passwords.
SSH keys use public-key cryptography. You generate a key pair: public key stays on the server, private key stays on your machine. When you connect, the server verifies you have the matching private key without transmitting it.
Advantages:
- Can’t be brute-forced like passwords
- Can be much longer than practical passwords
- Can be protected with a passphrase for extra security
- Easy to revoke (just remove public key from server)
- Enables automated authentication for scripts
Generate keys:
ssh-keygen -t ed25519 -C "your_email@example.com"
Copy to server:
ssh-copy-id user@server
Security best practices:
- Disable password authentication in /etc/ssh/sshd_config
- Change default SSH port
- Use fail2ban to block brute force attempts
- Regularly rotate keys
- Use ssh-agent to avoid typing passphrases repeatedly
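The first two hardening steps might look like this in /etc/ssh/sshd_config — an illustrative excerpt, not a complete config; keep a second session open and test before disconnecting, since a typo here can lock you out:

```
# /etc/ssh/sshd_config (excerpt) -- reload sshd after editing
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
MaxAuthTries 3
```

Validate the syntax with sshd -t before reloading the service.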
Cron Jobs and Automation
Question: How do you schedule automated tasks in Linux?
cron is the time-based job scheduler. The crontab format is:
* * * * * command
│ │ │ │ │
│ │ │ │ └─── Day of week (0-7, Sunday is 0 or 7)
│ │ │ └───── Month (1-12)
│ │ └─────── Day of month (1-31)
│ └───────── Hour (0-23)
└─────────── Minute (0-59)
Examples:
# Every day at 2:30 AM
30 2 * * * /home/user/backup.sh
# Every Monday at 9 AM
0 9 * * 1 /home/user/weekly_report.sh
# Every 15 minutes
*/15 * * * * /home/user/check_service.sh
# First day of every month
0 0 1 * * /home/user/monthly_cleanup.sh
Edit crontab:
crontab -e
List current cron jobs:
crontab -l
Pro tips for cron jobs:
- Always use absolute paths in scripts
- Redirect output to a log file
- Set up email notifications for failures
- Test scripts manually before scheduling
- Remember cron runs with limited environment variables
Modern alternative: systemd timers offer more flexibility and better integration with systemd logging. But cron is universal and works everywhere.
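For comparison, the nightly 2:30 AM backup above as a systemd timer might look like this — unit names are placeholders, and a matching backup.service would run the actual script:

```
# /etc/systemd/system/backup.timer (illustrative)
[Unit]
Description=Nightly backup

[Timer]
OnCalendar=*-*-* 02:30:00
# Run on next boot if the machine was off at the scheduled time
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now backup.timer, and inspect schedules with systemctl list-timers. The Persistent= catch-up behavior and journalctl integration are the main advantages over plain cron.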
Common Interview Mistakes to Avoid
After conducting dozens of interviews, I’ve noticed patterns in what trips people up. Here are the biggest mistakes and how to avoid them:
Not explaining your thinking. When asked a troubleshooting question, don’t just list commands. Walk through your reasoning: “I’d start by checking X because Y is usually the cause of Z.”
Memorizing commands without understanding. Interviewers can tell when you’ve memorized something versus actually understand it. Be ready to explain why a command works, not just how to use it.
Claiming you know everything. It’s okay to say “I haven’t used that specific tool, but I understand the concept and could learn it quickly.” Honesty impresses more than bluffing.
Ignoring security. In 2026, security isn’t optional. When discussing solutions, mention security implications. Chmod 777? Explain why that’s dangerous. Running as root? Acknowledge the risk.
Not asking clarifying questions. Real DevOps work is full of ambiguity. Asking questions shows you think critically. “Before I answer, what’s the system running? How many users? What’s the priority—speed or thoroughness?”
Overthinking simple questions. Sometimes the answer really is just “use grep.” Don’t overcomplicate to seem smart.
How to Actually Prepare for These Interviews
Reading this guide is a start, but you need hands-on practice. Here’s what actually works:
Set up your own lab environment. Spin up a few VMs or use free tier cloud instances. Practice everything in this guide. Break things on purpose and fix them. You learn more from failures than successes.
Build something real. Create a simple web app and deploy it on Linux. Set up monitoring, logging, backups, and automation. This gives you concrete examples to discuss in interviews.
Read others’ scripts. GitHub is full of DevOps automation. Read through popular repositories, understand how experienced engineers structure their scripts, and steal good patterns.
Use a cheat sheet, then wean yourself off. Create your own reference with commands you struggle to remember. Over time, you’ll need it less as muscle memory builds.
Practice explaining out loud. Talk through your troubleshooting process even when alone. Sounds weird, but it helps you articulate technical concepts clearly, which is crucial in interviews.
Join communities. The r/devops and r/linuxadmin subreddits, DevOps Discord servers, and local meetups connect you with people who’ve been through these interviews. Ask questions, share experiences, learn from others.
The Questions You Should Ask Them
Interviews go both ways. You’re evaluating whether this company is somewhere you want to work. Good questions reveal a lot:
“What does your infrastructure look like?” Tests whether they’ll be honest about technical debt and mess versus overselling.
“How do you handle on-call and incidents?” Reveals work-life balance and whether they have mature incident response.
“What does career growth look like for someone in this role?” Shows if they develop junior engineers or just want cheap labor.
“What’s your approach to automation and infrastructure as code?” Indicates whether they’re modern in their practices or still managing servers by hand.
“Can you describe a recent production incident and how it was handled?” Tells you about their culture—do they blame individuals or fix systems?
The way they answer these questions tells you as much as the technical discussion.
Frequently Asked Questions
How long does it take to prepare for a Linux DevOps interview?
Honestly? It depends on where you’re starting from. If you’re completely new to Linux, give yourself at least 2-3 months of consistent practice. An hour or two daily makes a huge difference. If you’ve been tinkering with Linux already, maybe a few weeks to polish your knowledge and practice articulating it.
I’ve seen people cram for a week and somehow land the job, but they struggled once they started. Better to take the time to actually understand things. You’re not just trying to pass an interview—you’re building a career.
The good news is Linux skills compound. Once you get the fundamentals down, everything else builds on top. Spend your first month getting really comfortable with the command line, file permissions, and basic troubleshooting. The rest follows more easily.
Do I need to know all Linux distributions for DevOps?
No, and anyone who says you do is lying or trying to intimidate you. Focus on understanding one distribution family well: either Debian-based (Debian, Ubuntu) or Red Hat-based (RHEL, Rocky Linux, AlmaLinux).
Most concepts transfer between distributions. If you know apt on Ubuntu, learning dnf on RHEL takes maybe an afternoon. The core Linux principles—processes, permissions, networking—work the same everywhere.
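To show how little distribution-specific knowledge you actually need, here's a tiny helper I'd sketch in an interview: detect which package-manager family the host uses so one script works on both Ubuntu and RHEL-style systems. The function name is my own invention.

```shell
# Detect the package-manager family by checking which binary exists.
detect_pkg_manager() {
  if command -v apt-get >/dev/null 2>&1; then
    echo "apt"      # Debian/Ubuntu family
  elif command -v dnf >/dev/null 2>&1; then
    echo "dnf"      # RHEL/Rocky/Fedora family
  elif command -v yum >/dev/null 2>&1; then
    echo "yum"      # older RHEL/CentOS
  else
    echo "unknown"
  fi
}

pm=$(detect_pkg_manager)
echo "This host uses: $pm"
```

Once you know the family, the commands map almost one-to-one: `apt install nginx` becomes `dnf install nginx`, `apt search` becomes `dnf search`, and so on.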
In my experience, most companies use either Ubuntu for newer infrastructure or RHEL variants for enterprise stuff. Learn whichever matches the jobs you’re targeting. You can always pick up the other one later.
What’s more important: memorizing commands or understanding concepts?
Understanding concepts, hands down. Interviewers can spot memorization versus real knowledge instantly.
Here’s a test: If you can explain why a command works and what alternatives exist, you understand it. If you can only recite the syntax, that’s memorization.
Example: Knowing to type “chmod 755 file” is memorization. Understanding that 755 gives the owner read, write, and execute while limiting group and others to read and execute, and explaining when you’d use 644 instead, is understanding.
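Here's that distinction as something you can actually run. The filenames are arbitrary; the point is what each octal digit (owner/group/other) means: 7 = rwx (4+2+1), 5 = r-x (4+1), 6 = rw- (4+2), 4 = r--.

```shell
# Work in a throwaway directory so nothing real is touched.
tmp=$(mktemp -d)
cd "$tmp"
touch script.sh config.yml

# 755: owner read/write/execute; group and others read/execute.
# Typical for scripts and directories.
chmod 755 script.sh

# 644: owner read/write; everyone else read-only.
# Typical for config files and other non-executable data.
chmod 644 config.yml

# GNU stat shows the octal mode next to each name.
stat -c '%a %n' script.sh config.yml
# prints:
# 755 script.sh
# 644 config.yml
```

If you can narrate that output in an interview, you've demonstrated understanding rather than recall.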
Commands are googleable. Problem-solving ability isn’t. Focus on the “why” behind the commands, and you’ll naturally remember the syntax through practice.
Should I set up a home lab or use cloud platforms for practice?
Both have value, but I’d start with cloud free tiers. AWS, Google Cloud, and Azure all offer free instances that are perfect for learning. You get exposure to real infrastructure without hardware costs.
The advantage of cloud: You learn cloud platforms simultaneously, which is valuable since most modern DevOps happens in the cloud anyway. Plus, you can destroy and rebuild environments easily without worrying about breaking anything important.
Home labs are great if you want to learn networking more deeply or if you like physical hardware. But for pure Linux skills? Cloud is usually more practical and career-relevant.
Start with one free-tier VM and actually use it for projects. Don’t spin up ten servers and let them sit idle. Build something, break it, fix it, repeat.
What certifications should I get for DevOps?
Unpopular opinion: Start without certifications. Get your hands dirty first, build real skills, then consider certs if they help your specific career goals.
That said, if you want certifications, the Linux Foundation’s LFCS (Linux Foundation Certified System Administrator) is solid for Linux fundamentals. It’s practical—you actually perform tasks in a real environment, not multiple choice.
For cloud, AWS Solutions Architect Associate or the equivalent Google/Azure certs are valuable. Kubernetes certifications (CKA, CKAD) are worth it if you’re going the container route.
But here’s the thing: I’ve interviewed plenty of people with certifications who couldn’t troubleshoot basic issues. And I’ve hired people with zero certs who had impressive GitHub profiles showing they built real things.
Certs can open doors and get past HR filters. Just don’t mistake them for actual competence. They’re supplementary, not primary.
How do I gain experience if I can’t get a DevOps job without experience?
The classic catch-22, right? Here’s how to break it:
First, contribute to open source. Find projects that need help with CI/CD pipelines, deployment scripts, or infrastructure code. Real contributions to real projects count as experience.
Second, build your own projects and document them. Set up a blog on a VPS you configured yourself. Create a multi-tier application with proper monitoring and logging. Build something that solves a problem you actually have.
Third, look for adjacent roles. System administrator, cloud support, or junior developer positions let you touch infrastructure while building experience. DevOps is often a progression, not an entry point.
Fourth, network like crazy. Go to meetups, join online communities, help others with their problems. I’ve seen more people land jobs through connections than through cold applications.
Finally, consider contract or freelance work. Smaller projects, lower stakes, but you’re building real experience. Even helping local businesses with their infrastructure counts.
What are the biggest differences between working with Linux in development vs. production?
Production is scary. That’s the first thing. In development, mistakes are learning opportunities. In production, mistakes cost money and wake people up at 3 AM.
Production systems need monitoring, logging, backups, security hardening, and disaster recovery plans. Your development server? Probably has none of that. You might run everything as root in dev (don’t, but people do). In production, you follow the principle of least privilege religiously.
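A concrete shape least privilege often takes in production is a sudoers drop-in: a dedicated account allowed to restart exactly one service instead of getting full root. The names here (“deployer”, “myapp”) are hypothetical, and the file is written to a temp directory so the example runs without root; on a real box it belongs in /etc/sudoers.d/ after you've created the user.

```shell
tmp=$(mktemp -d)

# One rule: deployer may restart myapp via sudo, nothing else.
cat > "$tmp/deployer" <<'EOF'
deployer ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp
EOF

# sudoers drop-ins must be read-only for owner and group, mode 440.
chmod 440 "$tmp/deployer"

stat -c '%a' "$tmp/deployer"   # prints 440
```

In development, nobody bothers with this. In production, it's the difference between a compromised deploy key restarting one service and a compromised deploy key owning the box.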
Change management is stricter in production. No “let me just quickly test this” directly on prod servers. Everything goes through proper testing, staging, and approval processes.
Performance matters more in production. A slow query that’s annoying in dev might crash production under real load. Resource constraints you ignored in dev become critical.
The good news? Understanding this distinction makes you valuable. Lots of developers don’t think about production realities. DevOps people who do become indispensable.
How important is shell scripting compared to learning Python or Go for DevOps?
You need basic shell scripting, period. It’s non-negotiable for DevOps work. Can you automate simple tasks? Parse log files? Write deployment scripts? If not, learn bash first.
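“Parse log files” is exactly the kind of thing interviewers ask you to do live. Here's a representative sketch: count 5xx responses per URL in an access log. The log format and contents are made up for illustration, written to a temp file so the example is self-contained.

```shell
tmp=$(mktemp -d)
cat > "$tmp/access.log" <<'EOF'
10.0.0.1 GET /api/users 200
10.0.0.2 GET /api/orders 500
10.0.0.3 GET /api/orders 502
10.0.0.1 GET /healthz 200
EOF

# awk: field 4 is the status; keep lines starting with 5, tally by URL.
awk '$4 ~ /^5/ { count[$3]++ } END { for (u in count) print count[u], u }' "$tmp/access.log"
# prints: 2 /api/orders
```

If you can write something like this from memory and explain each piece, you're past the bash bar for most junior interviews.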
That said, shell scripts get messy for complex logic. Once you’re comfortable with bash basics, pick up Python or Go for anything more sophisticated.
Python is great for: Complex text processing, interacting with APIs, data manipulation, automation that needs error handling and testing. Most DevOps teams use Python for serious automation.
Go is excellent for: Building CLI tools, system utilities, anything performance-critical. Increasingly popular for DevOps tooling.
My recommendation? Get solid with bash first since you’ll use it daily. Then learn Python—it’s more versatile and has endless DevOps libraries. Go is optional but valuable if you want to build tools yourself.
Don’t try to learn everything at once. Shell scripting for a month, then Python. You’ll be productive faster than trying to learn three languages simultaneously.
What’s the difference between DevOps and SRE, and which should I target?
DevOps is more of a culture and set of practices—breaking down silos between development and operations, automating everything, continuous improvement. It’s broader and more philosophical.
SRE (Site Reliability Engineering) is Google’s approach to operations with a more defined role. SREs treat operations as software problems, with error budgets, service level objectives, and heavy automation. It’s more prescriptive and engineering-focused.
In practice, the lines blur. A DevOps engineer at one company might do the same work as an SRE at another. Job titles are inconsistent across the industry.
For beginners, target DevOps roles. They’re more common, especially outside giant tech companies. SRE positions often want more experience and deeper system knowledge.
That said, study SRE principles even if you’re pursuing DevOps. Google’s SRE book is free online and teaches valuable concepts like error budgets and toil reduction.
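The error-budget idea from the SRE book is just arithmetic: a 99.9% availability SLO leaves 0.1% of the period as budget. A quick sketch of the math, using a 30-day month:

```shell
slo=99.9
minutes_per_month=$((30 * 24 * 60))   # 43200

# budget = (100 - SLO)% of the period, in minutes.
budget=$(awk -v slo="$slo" -v mins="$minutes_per_month" \
  'BEGIN { printf "%.1f", (100 - slo) * mins / 100 }')

echo "A ${slo}% SLO allows ${budget} minutes of downtime per 30-day month"
# roughly 43.2 minutes
```

Being able to do this calculation on a whiteboard, and explain that the budget is something you deliberately spend on releases and experiments, signals you've absorbed the SRE mindset even if the job title says DevOps.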
How do I stay current with Linux and DevOps technologies?
The ecosystem moves fast, but don’t panic. Focus on fundamentals first—those don’t change much. The basics of Linux, networking, and system administration are largely the same as ten years ago.
For staying current on tools and trends:
- Follow key blogs (DevOps.com, The New Stack, vendor blogs)
- Subscribe to newsletters (DevOps Weekly, SRE Weekly)
- Join Reddit communities (r/devops, r/sre, r/linuxadmin)
- Participate in online communities (Discord servers, Slack groups)
- Attend virtual meetups and conferences
- Follow thought leaders on Twitter/LinkedIn
But here’s my actual advice: Don’t chase every new tool. Focus on understanding the problems tools solve. When you know the problems, learning new tools that solve them becomes much easier.
Kubernetes solves container orchestration. Understanding that problem is more valuable than memorizing kubectl commands. The next orchestrator that comes along? You’ll pick it up fast because you understand the underlying challenge.
Learn principles over tools, and you’ll never be obsolete.
What should I do if I freeze up during an interview?
First, breathe. Seriously. Take a moment, acknowledge you need to think, and organize your thoughts. Good interviewers appreciate thoughtfulness over rushed answers.
If you don’t know something, say so honestly: “I haven’t worked with that specific tool, but here’s how I’d approach learning it,” or “I’m not certain, but my intuition says…” and explain your reasoning.
For technical questions where you’re stuck, talk through your thought process. Even if you don’t reach the answer, showing how you’d work toward it demonstrates problem-solving ability.
Remember that interviewers have been in your shoes. Most are rooting for you to succeed. They want to hire someone, not trick you.
If you completely blank on something you know you should know, ask if you can come back to it. Answer other questions, build confidence, and circle back. Sometimes your brain needs a minute.
And honestly? Sometimes interviews just go badly. It happens to everyone. Learn from it, adjust, and move on to the next one. One bad interview doesn’t define your career.
Is it too late to start a DevOps career if I’m switching from a different field?
Absolutely not. I know successful DevOps engineers who came from teaching, retail management, the military, and even art school. Your previous experience often brings valuable perspectives others lack.
The challenge isn’t age or background—it’s catching up on technical skills. But that’s solvable with dedicated study and practice. Give yourself 6-12 months of serious learning, build a portfolio of projects, and you can make the switch.
Your previous career might actually help. Former teachers often excel at documentation and mentoring. Retail managers understand customer service and incident response. Project managers already know how to coordinate complex initiatives.
Highlight transferable skills in interviews. Problem-solving, communication, handling pressure, managing stakeholders—these matter as much as technical chops.
The tech industry talks about being welcoming to career changers, and while it’s imperfect, opportunities exist. Focus on building real skills, contribute to open source, network authentically, and you’ll find your way in.
Age is only a barrier if you let it be. I’ve seen 40-year-olds land their first DevOps role and thrive. What matters is your willingness to learn and adapt.
Additional Resources
Internal Resources
- DevOps Career Roadmap for Engineering Students (2026)
- What Is DevOps? Real-World Explanation Without Buzzwords (2026 Guide)
Final Thoughts
Linux skills for DevOps aren’t just about passing interviews. They’re about being effective at your job, solving real problems, and not panicking when production breaks at midnight.
The questions in this guide reflect what’s actually being asked in 2026. Practice them, understand them deeply, and you’ll not only ace interviews but actually be good at the job afterward.
Remember: Everyone started as a beginner. The senior engineer who grills you in the interview? They were once googling “how to exit vim” just like everyone else. The difference is they kept learning, stayed curious, and put in the practice.
You can do this. Linux is learnable, DevOps is achievable, and your first role is closer than you think. Good luck out there.
About The Author
Kedar Salunkhe
DevOps Engineer | Seven years of fixing things that break at 2am
Kubernetes • OpenShift • AWS • Coffee
I’ve spent almost 7 years keeping production systems running, often when everyone else is asleep. These days I’m working with Kubernetes and OpenShift deployments, automating everything that can be automated, and occasionally remembering to document the things I fix. When I’m not troubleshooting clusters, I’m probably trying out new DevOps tools or explaining to someone why we can’t just “restart everything” as a debugging strategy.
Have questions about any of these topics? Drop them in the comments. I read everything and respond when I can. And if this guide helped you land an interview or a job, let me know—those stories make my day.