How DevOps Works: Complete CI/CD Flow from Code Commit to Production 

Last Updated: January 2026

Last Tuesday, I watched a junior developer named Maya deploy her first feature to production. Her hands were shaking as she clicked the sync button in ArgoCD. Two minutes later, the feature was live, serving thousands of users. She looked at me with this mix of terror and excitement and said, “Wait, that’s it?” 

Yeah, that’s it. But let me break down everything that happened behind the scenes to make that two-minute deployment possible. 

How DevOps Works: The Real-World Workflow From Code to Production 


I’m going to walk you through our actual deployment process at ShopFast, an e-commerce platform handling about 50,000 transactions daily. This isn’t theoretical—this is what happens every single day, multiple times a day. 

Stage 1: Developer Writes Code 

Tool: VS Code + Git 

Let’s follow Sarah, one of our backend developers. She’s been assigned a ticket to add a new feature: allow customers to apply gift cards during checkout. 

She creates a new branch from main: 

git checkout main 
git pull origin main 
git checkout -b feature/gift-card-payment 

She spends the next few hours writing code. The changes involve: 

  • A new service class GiftCardService.js to validate and process gift cards 
  • Updates to the checkout controller 
  • Database migration to add a gift_cards table 
  • Unit tests for the new functionality 

Her file structure looks like this: 

src/ 
 services/ 
   GiftCardService.js (new) 
 controllers/ 
   CheckoutController.js (modified) 
 tests/ 
   GiftCardService.test.js (new) 
migrations/ 
 20260119_add_gift_cards_table.sql (new) 
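The article doesn't show the service code itself, but a minimal sketch of what GiftCardService.js might look like can make the stages that follow concrete. The method names, repository interface, and validation rules here are illustrative assumptions, not the real ShopFast code:

```javascript
// GiftCardService.js -- illustrative sketch, not the actual ShopFast code.
// Validates a gift card code and deducts an amount from its balance.
class GiftCardService {
  constructor(repository) {
    this.repository = repository; // data-access layer (assumed interface)
  }

  // Codes like "TEST-GIFT-100": uppercase groups separated by dashes.
  isValidCodeFormat(code) {
    return /^[A-Z0-9]+(-[A-Z0-9]+)+$/.test(code);
  }

  async redeem(code, amount) {
    if (!this.isValidCodeFormat(code)) {
      throw new Error('Invalid gift card code format');
    }
    const card = await this.repository.findByCode(code);
    if (!card) throw new Error('Gift card not found');
    if (card.expiresAt < new Date()) throw new Error('Gift card expired');
    if (card.balance < amount) throw new Error('Insufficient balance');
    card.balance -= amount;
    await this.repository.save(card);
    return { balance: card.balance };
  }
}

module.exports = { GiftCardService };
```

Injecting the repository through the constructor is what makes the unit tests in Stage 4 possible without a real database.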

Stage 2: Commit and Push Code 

Tool: Git + GitHub 

Sarah’s satisfied with her changes. She’s run the tests locally, everything passes. Time to commit: 

git add . 
git commit -m "feat: add gift card payment support 

- Add GiftCardService to validate and deduct gift card balances 
- Update checkout flow to accept gift card as payment method 
- Add database migration for gift_cards table 
- Include unit tests with 95% coverage" 
 

Notice the commit message? We follow the Conventional Commits format. The feat: prefix tells us this is a new feature. The detailed description helps reviewers understand what changed. 
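Conventional commit subjects are easy to check mechanically. Here is a tiny, illustrative parser (real projects typically use commitlint; the function and field names below are assumptions for the sketch):

```javascript
// Parse a conventional-commit subject like "feat: add gift card payment support".
// Illustrative sketch only; commitlint is the standard tool for this.
function parseConventionalCommit(subject) {
  const match = /^(\w+)(\([^)]+\))?(!)?: (.+)$/.exec(subject);
  if (!match) return null;
  return {
    type: match[1],                               // feat, fix, chore, ...
    scope: match[2] ? match[2].slice(1, -1) : null, // optional "(scope)"
    breaking: match[3] === '!',                   // "!" marks a breaking change
    description: match[4],
  };
}
```

A CI step can reject any PR whose subject line parses to `null`.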

She pushes to GitHub: 

git push origin feature/gift-card-payment 

Stage 3: Create Pull Request 

Tool: GitHub 

Sarah opens GitHub in her browser and creates a Pull Request (PR). GitHub automatically detects her branch and shows a “Compare & pull request” button. 

She fills out the PR template: 

## Description 
Implements gift card payment functionality for checkout process 
 
## Changes Made 
- Created GiftCardService with validation logic 
- Modified CheckoutController to handle gift card payments 
- Added database migration for gift_cards table 
- Added comprehensive unit tests 
 
## Testing 
- [x] Unit tests pass (95% coverage) 
- [x] Tested locally with test gift card codes 
- [x] Database migration runs successfully 
 
## Related Ticket 
Closes #SHOP-1247 

The moment she clicks “Create Pull Request”, several things happen automatically. 

Stage 4: Automated CI Pipeline Runs

Tool: GitHub Actions 

GitHub Actions immediately kicks in. We have a workflow file at .github/workflows/ci.yml that runs on every PR: 

name: CI Pipeline 
on: 
 pull_request: 
   branches: [main] 
 
jobs: 
 test: 
   runs-on: ubuntu-latest 
   steps: 
     - name: Checkout code 
       uses: actions/checkout@v4 
      
     - name: Setup Node.js 
       uses: actions/setup-node@v4 
       with: 
         node-version: '18' 
         cache: 'npm' 
      
     - name: Install dependencies 
       run: npm ci 
      
     - name: Run linting 
       run: npm run lint 
      
     - name: Run unit tests 
       run: npm run test 
      
     - name: Run integration tests 
       run: npm run test:integration 
      
     - name: Build application 
       run: npm run build 

What’s happening in this pipeline: 

Linting (10 seconds): ESLint checks for code style issues, unused variables, and potential bugs. It catches things like: 

  • Missing semicolons 
  • Unused imports 
  • Console.log statements left in code 
  • Inconsistent formatting 

Unit Tests (30 seconds): Jest runs all 342 unit tests including Sarah’s new ones: 

PASS  src/tests/GiftCardService.test.js 
 ✓ should validate gift card code format (5ms) 
 ✓ should reject expired gift cards (8ms) 
 ✓ should deduct correct amount from balance (6ms) 
 ✓ should handle insufficient balance (7ms) 
 
Test Suites: 47 passed, 47 total 
Tests:       342 passed, 342 total 
Time:        28.941s 

Integration Tests (2 minutes): These tests spin up Docker containers with PostgreSQL and Redis, then test the entire flow: 

  • Can we connect to the database? 
  • Does the checkout API endpoint respond correctly? 
  • Does gift card validation work with real database queries? 

Build (45 seconds): The application gets compiled and bundled. For our Node.js app, this means TypeScript compilation and creating production-ready assets. 

GitHub shows the status on the PR: 

  • ✅ Linting — Passed 
  • ✅ Unit Tests — Passed 
  • ✅ Integration Tests — Passed 
  • ✅ Build — Passed 

Stage 5: Code Review 

Tool: GitHub 

Sarah’s PR now needs human review. She assigns two reviewers: Mike (tech lead) and Jason (senior developer). 

Mike reviews the code and leaves comments: 

src/services/GiftCardService.js 
Line 47: Consider adding a check for negative amounts here.  
What happens if someone tries to use a gift card for -$50? 
 
src/controllers/CheckoutController.js   
Line 112: Nice error handling! This will give users clear  
feedback if their gift card is invalid. 

Sarah responds to Mike’s comment: 

Good catch! Added validation to reject amounts <= 0.  
Also added a test case for this scenario. 
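The fix Sarah describes might look something like this. It's a sketch under assumed names, not the actual diff:

```javascript
// Illustrative sketch of the fix: reject zero and negative redemption
// amounts before touching the gift card balance. Names are assumptions.
function validateRedemptionAmount(amount) {
  if (typeof amount !== 'number' || Number.isNaN(amount)) {
    throw new Error('Amount must be a number');
  }
  if (amount <= 0) {
    throw new Error('Amount must be greater than zero');
  }
  return true;
}
```

Guarding at the service boundary means a crafted request for -$50 fails fast instead of silently crediting the card.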

She pushes another commit addressing the feedback: 

git add . 
git commit -m "fix: add validation for negative gift card amounts" 
git push origin feature/gift-card-payment 

GitHub Actions runs again (takes another 3-4 minutes). All checks pass. 

Jason reviews and approves: “LGTM! Great test coverage.” 

Mike approves: “Looks good, thanks for addressing the feedback.” 

Stage 6: Merge to Main Branch 

Tool: GitHub 

With two approvals and all checks passing, Sarah clicks the big green “Merge pull request” button. She selects “Squash and merge” to keep the commit history clean. 

GitHub combines all her commits into one: 

feat: add gift card payment support (#1247) 
 
- Add GiftCardService to validate and deduct gift card balances 
- Update checkout flow to accept gift card as payment method   
- Add database migration for gift_cards table 
- Include unit tests with 95% coverage 
 
Co-authored-by: Sarah Chen <sarah@shopfast.com> 

The code is now in the main branch. This triggers another GitHub Actions workflow. 

Stage 7: Build and Push Docker Image 

Tool: GitHub Actions + Docker + Amazon ECR 

We have a separate workflow for the main branch at .github/workflows/build-deploy.yml: 

name: Build and Deploy 
on: 
 push: 
   branches: [main] 
 
jobs: 
 build: 
   runs-on: ubuntu-latest 
   steps: 
     - name: Checkout code 
       uses: actions/checkout@v4 
      
     - name: Set up Docker Buildx 
       uses: docker/setup-buildx-action@v3 
      
     - name: Configure AWS credentials 
       uses: aws-actions/configure-aws-credentials@v4 
       with: 
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} 
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} 
         aws-region: us-east-1 
      
     - name: Login to Amazon ECR 
       id: login-ecr 
       uses: aws-actions/amazon-ecr-login@v2 
      
     - name: Build and push Docker image 
       env: 
         ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }} 
         ECR_REPOSITORY: shopfast-checkout-service 
         IMAGE_TAG: ${{ github.sha }} 
       run: | 
         docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG . 
         docker tag $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG $ECR_REGISTRY/$ECR_REPOSITORY:latest 
         docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG 
         docker push $ECR_REGISTRY/$ECR_REPOSITORY:latest

What happens here: 

Docker Build: GitHub Actions reads our Dockerfile: 

FROM node:18-alpine 
WORKDIR /app 
COPY package*.json ./ 
RUN npm ci --only=production 
COPY dist/ ./dist/ 
EXPOSE 3000 
CMD ["node", "dist/server.js"] 

It creates an image containing: 

  • Node.js runtime 
  • Our application code 
  • Dependencies 
  • Configuration files 

The image is about 215MB compressed. 

Tag Image: The image gets tagged with: 

  • The git commit SHA: a7f3c92 (unique identifier) 
  • The latest tag (for convenience) 

Push to ECR: The image uploads to Amazon’s container registry. This takes about 40 seconds. 

Now we have a Docker image ready to deploy: 

123456789.dkr.ecr.us-east-1.amazonaws.com/shopfast-checkout-service:a7f3c92 

Stage 8: Update Kubernetes Manifest 

Tool: GitHub Actions + Git 

The workflow continues by updating our Kubernetes configuration repository: 

      - name: Update Kubernetes manifest 
        run: | 
          git clone https://${{ secrets.GIT_TOKEN }}@github.com/shopfast/k8s-manifests.git 
          cd k8s-manifests 
          sed -i "s|image: .*/shopfast-checkout-service:.*|image: $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG|" production/checkout-service/deployment.yaml 
          git config user.name "GitHub Actions" 
          git config user.email "actions@github.com" 
          git add . 
          git commit -m "Update checkout-service image to $IMAGE_TAG" 
          git push 

This updates the deployment manifest in our separate k8s-manifests repository. 

Before: 

apiVersion: apps/v1 
kind: Deployment 
metadata: 
 name: checkout-service 
 namespace: production 
spec: 
 replicas: 5 
 template: 
   spec: 
     containers: 
     - name: checkout-service 
       image: 123456789.dkr.ecr.us-east-1.amazonaws.com/shopfast-checkout-service:b2e1d84 
       ports: 
       - containerPort: 3000 

After: 

apiVersion: apps/v1 
kind: Deployment 
metadata: 
 name: checkout-service 
 namespace: production 
spec: 
 replicas: 5 
 template: 
   spec: 
     containers: 
     - name: checkout-service 
       image: 123456789.dkr.ecr.us-east-1.amazonaws.com/shopfast-checkout-service:a7f3c92 
       ports: 
       - containerPort: 3000 

Notice the image tag changed from b2e1d84 to a7f3c92. 

Stage 9: ArgoCD Detects Changes (But Doesn’t Auto-Deploy) 

Tool: ArgoCD 

ArgoCD is continuously monitoring our k8s-manifests repository. Every 3 minutes, it checks if the manifests in Git match what’s running in our Kubernetes cluster. 

After the manifest update, ArgoCD detects a difference: 

Git (Desired State): 

image: shopfast-checkout-service:a7f3c92 

Kubernetes (Current State): 

image: shopfast-checkout-service:b2e1d84 

ArgoCD’s UI shows the application status as “OutOfSync”: 

Application: checkout-service 
Status: OutOfSync 
Sync Status: Out of Sync (1 resource differs) 
Health: Healthy 
Last Synced: 2 hours ago 

Here’s the critical part: ArgoCD does NOT automatically sync for production. We’ve configured it with manual sync for production deployments because we want human oversight. 

Our ArgoCD Application configuration looks like this: 

apiVersion: argoproj.io/v1alpha1 
kind: Application 
metadata: 
 name: checkout-service 
 namespace: argocd 
spec: 
 project: production 
 source: 
   repoURL: https://github.com/shopfast/k8s-manifests 
   targetRevision: HEAD 
   path: production/checkout-service 
 destination: 
   server: https://kubernetes.default.svc 
   namespace: production 
 syncPolicy: 
   automated: null  # Manual sync required 
   syncOptions: 
   - CreateNamespace=true 

Notice automated: null? That means manual sync only. 
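For contrast, our staging applications use automated sync. A typical syncPolicy for that looks like the sketch below (the exact options are assumptions about a reasonable staging setup, though `prune` and `selfHeal` are standard ArgoCD fields):

```yaml
# Staging sync policy (sketch): ArgoCD applies changes as soon as Git updates.
syncPolicy:
  automated:
    prune: true      # delete cluster resources that were removed from Git
    selfHeal: true   # revert manual drift back to the Git-declared state
  syncOptions:
  - CreateNamespace=true
```

Same tool, same repo layout; the only difference between staging and production is whether a human clicks the button.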

Stage 10: Manual Sync from ArgoCD 

Tool: ArgoCD Web UI 

This is where the human element comes in. Mike, our tech lead, gets a Slack notification: 

📦 New deployment ready for checkout-service 
Version: a7f3c92 
Changes: feat: add gift card payment support 
Status: OutOfSync – Manual sync required 
 
View in ArgoCD: https://argocd.shopfast.com/applications/checkout-service 

Mike opens ArgoCD in his browser. The dashboard shows: 

┌─────────────────────────────────────────────┐ 
│ checkout-service                            │ 
│ Status: OutOfSync                           │ 
│                                             │ 
│ Diff Preview:                               │ 
│ ~ Deployment/checkout-service               │ 
│   spec.template.spec.containers[0].image    │ 
│   - b2e1d84                                 │ 
│   + a7f3c92                                 │ 
│                                             │ 
│ [Sync] [Refresh] [History]                 │ 
└─────────────────────────────────────────────┘ 
 

Mike reviews: 

  • ✅ Checked the PR and code review 
  • ✅ Verified tests passed in CI 
  • ✅ Confirmed this is the gift card feature Sarah worked on 
  • ✅ Reviewed the diff—only the image tag changed 
  • ✅ Checked that it’s a reasonable time to deploy (2 PM, plenty of workday left if something goes wrong) 

He clicks the “Sync” button. 

A modal appears with sync options: 

┌─ Sync Application ────────────────────────┐ 
│                                            │ 
│ Sync Strategy:                             │ 
│ ◉ Normal (Apply)                          │ 
│ ○ Force (Replace)                         │ 
│                                            │ 
│ Prune: ☐ Delete resources not in Git      │ 
│ Dry Run: ☐ Preview only                   │ 
│                                            │ 
│ [Cancel]  [Synchronize]                    │ 
└────────────────────────────────────────────┘ 
 

Mike keeps the defaults and clicks “Synchronize”. 

Stage 11: ArgoCD Applies Changes to Kubernetes 

Tool: ArgoCD + Kubernetes 

ArgoCD immediately starts applying the changes. Here’s what happens behind the scenes: 

Step 1: ArgoCD runs kubectl apply 

kubectl apply -f production/checkout-service/deployment.yaml 

This tells Kubernetes: “Update the checkout-service deployment to use the new image.” 

Step 2: Kubernetes Initiates Rolling Update 

Kubernetes uses a RollingUpdate strategy (defined in our deployment): 

spec: 
 replicas: 5 
 strategy: 
   type: RollingUpdate 
   rollingUpdate: 
     maxUnavailable: 1 
     maxSurge: 1 
 

This means: 

  • We have 5 pods running 
  • Kubernetes can create 1 extra pod during update (maxSurge: 1) 
  • Kubernetes can have 1 pod unavailable during update (maxUnavailable: 1) 
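These two knobs bound the pod count throughout the rollout. The arithmetic is simple enough to sketch:

```javascript
// Rolling-update capacity bounds: Kubernetes keeps the pod count between
// (replicas - maxUnavailable) and (replicas + maxSurge) during the rollout.
function rolloutBounds(replicas, maxSurge, maxUnavailable) {
  return {
    minAvailable: replicas - maxUnavailable, // pods guaranteed to serve traffic
    maxTotal: replicas + maxSurge,           // pods that may exist at any instant
  };
}
```

With replicas: 5 and both knobs set to 1, at least 4 pods are serving traffic and at most 6 exist at any moment, which is exactly the pattern in the timeline below.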

The Rolling Update Process: 

T+0 seconds: 

Old pods (b2e1d84): [●] [●] [●] [●] [●]  (5 running) 
New pods (a7f3c92): [ ]                 (0 running) 

Kubernetes creates a new pod with the new image. 

T+15 seconds: 

Old pods (b2e1d84): [●] [●] [●] [●] [●]  (5 running) 
New pods (a7f3c92): [○]                 (1 starting – pulling image) 

The new pod pulls the Docker image from ECR. This takes about 10 seconds. 

T+25 seconds: 

Old pods (b2e1d84): [●] [●] [●] [●] [●]  (5 running) 
New pods (a7f3c92): [○]                 (1 starting – running health checks) 

The container starts, and Kubernetes begins health checks. 

Our deployment has readiness probes configured: 

readinessProbe: 
 httpGet: 
   path: /health 
   port: 3000 
 initialDelaySeconds: 10 
 periodSeconds: 5 
 failureThreshold: 3 
 

Kubernetes hits http://pod-ip:3000/health every 5 seconds. Our app responds: 

{ 
 "status": "healthy", 
 "database": "connected", 
 "redis": "connected", 
 "uptime": 15 
} 

T+35 seconds: 

Old pods (b2e1d84): [●] [●] [●] [●] [○]  (4 running, 1 terminating) 
New pods (a7f3c92): [●]                 (1 running and ready) 

Once the new pod passes health checks, Kubernetes: 

  • Starts routing traffic to it 
  • Terminates one old pod gracefully 

T+50 seconds: 

Old pods (b2e1d84): [●] [●] [●] [●]      (4 running) 
New pods (a7f3c92): [●] [○]              (1 running, 1 starting) 

Kubernetes creates a second new pod. 

This process continues… 

T+2 minutes: 

Old pods (b2e1d84): [●] [●]              (2 running) 
New pods (a7f3c92): [●] [●] [●]          (3 running) 
 

T+3 minutes: 

Old pods (b2e1d84): [ ]                  (0 running) 
New pods (a7f3c92): [●] [●] [●] [●] [●]  (5 running) 
 

All pods have been replaced with the new version! 

Stage 12: Verify Pods are Running with New Changes 

Tool: kubectl + Kubernetes Dashboard 

Mike now needs to verify the deployment was successful. He has several ways to check: 

Option 1: ArgoCD UI 

ArgoCD updates in real-time showing: 

┌─────────────────────────────────────────────┐ 
│ checkout-service                            │ 
│ Status: Synced ✓                            │ 
│ Health: Healthy ✓                           │ 
│                                             │ 
│ Resources:                                  │ 
│ ✓ Deployment/checkout-service               │ 
│   - 5/5 pods ready                          │ 
│   - Image: a7f3c92                          │ 
│                                             │ 
│ Last Synced: 2 minutes ago                  │ 
└─────────────────────────────────────────────┘ 
 

Option 2: kubectl command line 

Mike can run commands to verify: 

kubectl get pods -n production -l app=checkout-service 
 

Output: 

NAME                                READY   STATUS    RESTARTS   AGE 
checkout-service-7d9f8b6c5d-8xk2p   1/1     Running   0          2m 
checkout-service-7d9f8b6c5d-k9p3r   1/1     Running   0          2m 
checkout-service-7d9f8b6c5d-m4n7q   1/1     Running   0          1m 
checkout-service-7d9f8b6c5d-p2w8t   1/1     Running   0          1m 
checkout-service-7d9f8b6c5d-x5j9m   1/1     Running   0          1m 
 

All 5 pods are in “Running” status with “1/1” containers ready. 

Verify the image version: 

kubectl describe pod checkout-service-7d9f8b6c5d-8xk2p -n production | grep Image: 

Output: 

Image: 123456789.dkr.ecr.us-east-1.amazonaws.com/shopfast-checkout-service:a7f3c92 

Perfect! The pods are running the new image. 

Check pod logs: 

kubectl logs checkout-service-7d9f8b6c5d-8xk2p -n production --tail=20 

Output: 

[2026-01-19T14:23:45.123Z] INFO: Server starting... 
[2026-01-19T14:23:45.456Z] INFO: Connected to PostgreSQL database 
[2026-01-19T14:23:45.789Z] INFO: Connected to Redis cache 
[2026-01-19T14:23:46.012Z] INFO: GiftCardService initialized 
[2026-01-19T14:23:46.234Z] INFO: Server listening on port 3000 
[2026-01-19T14:23:50.567Z] INFO: Health check passed 

The logs show the new GiftCardService initialized message—confirming our new code is running! 

Option 3: Check Kubernetes Events 

kubectl get events -n production --sort-by='.lastTimestamp' | grep checkout-service 

Output: 

3m   Normal   Scheduled       Pod    Successfully assigned production/checkout-service-7d9f8b6c5d-8xk2p 
3m   Normal   Pulling         Pod    Pulling image "shopfast-checkout-service:a7f3c92" 
3m   Normal   Pulled          Pod    Successfully pulled image 
3m   Normal   Created         Pod    Created container checkout-service 
3m   Normal   Started         Pod    Started container checkout-service 
2m   Normal   ScalingReplicaSet   Deployment   Scaled up replica set checkout-service-7d9f8b6c5d to 1 
1m   Normal   ScalingReplicaSet   Deployment   Scaled down replica set checkout-service-6c8a7b5d4e to 4 

These events show the rolling update progression—new pods scaling up, old pods scaling down. 

Stage 13: Smoke Testing in Production

Tool: Manual Testing + Postman 

Even though everything looks good in monitoring, Mike runs a quick manual smoke test. 

He uses Postman to hit the production API: 

Test 1: Health Check 

GET https://api.shopfast.com/health 
 
Response: 200 OK 
{ 
 "status": "healthy", 
 "version": "a7f3c92", 
 "services": { 
   "database": "connected", 
   "redis": "connected", 
   "giftcard": "available" 
 } 
} 

Test 2: Validate Gift Card (New Feature) 

POST https://api.shopfast.com/api/gift-card/validate 
Body: { 
 "code": "TEST-GIFT-100" 
} 
 
Response: 200 OK 
{ 
 "valid": true, 
 "balance": 100.00, 
 "currency": "USD" 
} 

Perfect! The new feature works. 

Test 3: Complete Checkout with Gift Card 

POST https://api.shopfast.com/api/checkout 
Body: { 
 "cart_id": "cart_xyz", 
 "payment": { 
   "type": "gift_card", 
   "gift_card_code": "TEST-GIFT-100" 
 } 
} 
 
Response: 200 OK 
{ 
 "order_id": "ORD-123456", 
 "status": "completed", 
 "amount_charged": 85.00, 
 "gift_card_balance_remaining": 15.00 
} 
 

It works! A customer just paid with a gift card successfully. 
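Smoke checks like these are easy to script once the manual run has established what "good" looks like. Below is a sketch of a programmatic check of the health response; the URL and field names are taken from the examples above, and the check itself is a pure function so it can run against any parsed JSON body:

```javascript
// Sketch: validate a parsed /health response body after a deploy.
// In CI you would fetch https://api.shopfast.com/health first.
function assertHealthy(body, expectedVersion) {
  if (body.status !== 'healthy') {
    throw new Error(`Unexpected status: ${body.status}`);
  }
  if (expectedVersion && body.version !== expectedVersion) {
    throw new Error(`Version mismatch: got ${body.version}, want ${expectedVersion}`);
  }
  return true;
}
```

Passing the expected commit SHA catches the subtle failure mode where the deploy "succeeded" but the old image is still serving.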

Stage 14: Announce Deployment 

Tool: Slack 

Mike posts in the #deployments channel: 

✅ checkout-service deployed to production 
 
Version: a7f3c92 
Feature: Gift card payment support 
Deploy Time: 14:25 UTC 
Duration: 3 minutes 
Status: Healthy ✓ 
 
Metrics: 
- Error rate: 0.02% (normal) 
- Response time: 124ms p95 (normal) 
- All 5 pods running new version 
 
Deployed by: @Mike (approved by @Jason @Mike) 
PR: https://github.com/shopfast/checkout-service/pull/1247 

Sarah sees the message and feels that rush of excitement—her code is live! 

The Complete Workflow Summary 

Code → Pull Request → CI (lint, test, build) → Code Review → Merge → Docker build → Push to ECR → Manifest update → ArgoCD manual sync → Rolling update → Verification → Smoke tests → Announcement. 

The Tools That Make It Work 

  • Git + GitHub — version control, pull requests, and code review 
  • GitHub Actions — CI pipeline (lint, test, build) and Docker image builds 
  • Docker + Amazon ECR — packaging the application and storing images 
  • Kubernetes — running containers and performing rolling updates 
  • ArgoCD — GitOps sync between the k8s-manifests repo and the cluster 
  • kubectl — command-line verification of pods, logs, and events 
  • Postman — production smoke tests 
  • Slack — deployment notifications and announcements 
Why Manual Sync for Production? 

You might wonder: “Why not auto-deploy to production?” 

We actually do auto-deploy to staging and development environments. But production is different: 

Reasons for manual production deploys: 

  1. Business timing: We don’t deploy during peak traffic (Black Friday, lunch hour) 
  2. Human oversight: A second pair of eyes catches potential issues 
  3. Coordination: Some deploys need database migrations or coordination with other teams 
  4. Rollback preparation: We ensure on-call engineers are available before deploying 
  5. Risk management: Production serves real customers and real money—we want control 

That said, our manual process is still fast. From clicking “Sync” to live in production: 3-5 minutes. 

Frequently Asked Questions (FAQ) 

How does DevOps work in real companies? 

In IT companies, DevOps works through automated CI/CD pipelines where developers commit code, raise pull requests, run automated tests, build Docker images, and deploy applications to Kubernetes using GitOps tools like ArgoCD, with monitoring and manual approvals for production. 

What happens after when developer merges code to main branch? 

After merging, CI/CD pipelines automatically build the application, create Docker images, push them to a container registry, update Kubernetes manifests, and trigger deployment workflows monitored by tools like ArgoCD. 

Why do companies use manual deployment approvals in production? 

Manual approvals reduce risk by ensuring business timing, rollback readiness, and human verification before deploying changes that affect real customers and revenue. 

Which tools are commonly used in a DevOps workflow? 

Common tools include GitHub, GitHub Actions, Docker, Kubernetes, ArgoCD, Helm, Prometheus, Grafana, and Slack for communication. 

How long does a DevOps deployment take? 

In mature DevOps setups, deployments typically take 3–15 minutes from merge to production, depending on testing, approval, and verification steps. 


Final Thoughts: How DevOps Really Works 

DevOps is not just about automation or tools; it’s about building confidence in every deployment. 
This real-world workflow shows how modern DevOps teams safely move code from a developer’s laptop to production using CI/CD pipelines, containerization, Kubernetes, and GitOps practices. 

By combining automation with human oversight, teams can release features faster while maintaining stability, security, and reliability at scale. 
If you’re learning DevOps or designing production-ready pipelines, this workflow represents how DevOps actually works in real companies today. 


About the Author 

Kedar Salunkhe 

DevOps | Kubernetes | AWS | Docker 

Kedar has 7+ years of hands-on experience designing, deploying, and maintaining production-grade infrastructure. He specializes in building scalable and secure CI/CD pipelines, managing container platforms, and implementing GitOps workflows for enterprise environments. 
