Practical Evolution of Docker + k3d for Frontend Developers
A practical guide for frontend engineers who don't want to be blocked by ops work
You might think Docker and Kubernetes are "backend stuff" that doesn't concern frontend developers. But the reality is:
- SSR rendering needs a stable Node.js environment
- Poster generation services depend on native libraries like canvas or libvips
- Microservices frontend local debugging requires running five or six projects simultaneously
If you only know "npm run dev locally," you'll hit a wall. Your code works perfectly on your machine but breaks in production.
Don't worry—this article fills that gap.
1. A Story: Why Do Local and Production Always Differ?
If you've been coding for a while, you've probably run into these frustrating situations:
Scenario 1: The Node.js Version Curse
You develop locally with Node.js 18, and your project uses a native C++ module (like canvas). But the server has Node.js 16 with incompatible ABI—deployment crashes immediately.
Scenario 2: Missing System Dependencies
npm install runs smoothly on your Mac. Then you deploy to Linux and discover: missing fonts, missing libvips library. Build fails.
Scenario 3: "It Worked on My Machine!"
Backend says the API is fine, but you can't get it working locally. Or you discover after deployment that Nginx rewrite rules weren't configured correctly, causing weird 404s.
These problems all share the same root cause: development and production environments are different.
Containerization solves this—make your local environment identical to production.
2. Core Concepts: Images, Containers, and Pods Explained
2.1 Image
An image is like a pre-configured computer template with all software installed. Copy this template anywhere, and it looks the same.
For example, you package a Next.js app image my-app:v1 containing:
- Ubuntu OS
- Node.js 18
- Project code
- Dependencies
- Font files
- Config files
No matter whose computer or what OS, instances running from this image behave identically.
2.2 Container
A container is a running instance based on an image.
Analogy:
- Image = Recipe
- Container = The actual dish made from the recipe
You can run 10 containers simultaneously from the same image (like node:18) to handle traffic. These containers are isolated from each other.
2.3 Pod (K8s Specific)
A Pod is Kubernetes' smallest schedulable unit.
Most of the time, one Pod contains one container. But sometimes you'll encounter the "sidecar pattern"—a Pod has two containers:
- Main container: runs your business code
- Sidecar container: collects logs, monitors, etc.
They share the same IP address and storage volumes.
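As a sketch, a sidecar Pod manifest might look like the following (the container names and paths here are hypothetical, chosen just to illustrate the pattern):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web               # main container: runs your business code
      image: my-app:v1
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-agent         # sidecar: tails the logs the main container writes
      image: busybox
      command: ['sh', '-c', 'tail -F /var/log/app/app.log']
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}            # shared scratch volume, lives as long as the Pod
```

Both containers mount the same emptyDir volume, so the sidecar can read whatever the main container writes—and because they're in one Pod, they also share one IP address.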
3. The Evolution: From Docker to K8s
3.1 Stage One: Dockerfile—Single Service Blueprint
What is a Dockerfile?
It's a "recipe file" for building images. Just like you write a recipe to tell others how to cook a dish, a Dockerfile tells Docker how to build your application environment.
A simple example:
# Base image: Node.js 18
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy dependency files
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy source code
COPY . .
# Expose port
EXPOSE 3000
# Start command
CMD ["npm", "run", "start"]
Key points:
- FROM node:18-alpine: based on the official Node.js image, no need to install Node yourself
- RUN npm install: installs dependencies
- COPY . .: copies the code in
- CMD: what to execute when the container starts
Dockerfile's role: Package file for a single service. Have 10 services? Have 10 Dockerfiles.
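In production you'll often see a multi-stage variant of the Dockerfile above: build in one stage, ship only the output, and get a much smaller image. A hedged sketch for a Next.js app (assuming the default next build / next start setup):

```dockerfile
# Stage 1: build (this stage's node_modules and source never reach the final image)
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: runtime — only production dependencies and build output
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
EXPOSE 3000
CMD ["npm", "run", "start"]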
3.2 Stage Two: Docker Compose—Local Debugging "Commander"
When you have multiple services to run simultaneously:
- Frontend (Next.js)
- BFF layer (API aggregation)
- Render service (generates posters)
- Redis (cache)
- Database
Starting them one by one with docker run and configuring network connections is maddening.
docker-compose.yml solves this:
version: '3.8'
services:
  frontend:
    build: ./apps/frontend
    ports:
      - '3000:3000'
    depends_on:
      - bff
  bff:
    build: ./apps/bff
    ports:
      - '8080:8080'
    environment:
      - REDIS_HOST=redis
    depends_on:
      - redis
      - render
  render:
    build: ./apps/render-service
    ports:
      - '9090:9090'
  redis:
    image: redis:7-alpine
    ports:
      - '6379:6379'
Start everything with one command:
docker-compose up
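One caveat: depends_on only controls start order, not readiness. If the BFF should wait until Redis actually accepts connections, the Compose Spec (modern docker compose) supports healthchecks plus a condition—a sketch, assuming redis-cli is in the image (it is in redis:7-alpine):

```yaml
services:
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']   # healthy once Redis answers PONG
      interval: 5s
      timeout: 3s
      retries: 5
  bff:
    build: ./apps/bff
    depends_on:
      redis:
        condition: service_healthy          # wait for the healthcheck, not just the start
```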
Docker Compose's role: Local development and simple single-machine deployment. Not for production.
3.3 Stage Three: K8s—"Fleet Commander" for Production
Docker Compose only works on a single machine. Production environments are brutal:
- Servers might lose power suddenly
- Traffic might spike to "Double Eleven" (Singles' Day shopping festival) levels
- Need to schedule resources across multiple servers
Docker Compose can't handle this. You need Kubernetes (K8s).
What can K8s do?
- Auto-scheduling: Place containers on appropriate servers
- Self-healing: Container crashed? Auto-restart
- Dynamic scaling: Traffic spike? Automatically add more containers
- Rolling updates: Update versions seamlessly—start new ones, stop old ones, users don't notice
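Self-healing and rolling updates both depend on K8s knowing whether your container is alive and ready. You tell it with probes—a sketch, assuming your app serves a /healthz endpoint (a hypothetical path; any cheap endpoint works):

```yaml
containers:
  - name: web
    image: my-app:v1
    livenessProbe:            # if this keeps failing, K8s restarts the container
      httpGet:
        path: /healthz
        port: 3000
      periodSeconds: 10
    readinessProbe:           # until this passes, K8s routes no traffic to the Pod
      httpGet:
        path: /healthz
        port: 3000
      initialDelaySeconds: 5
```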
But K8s is too heavy? Use K3s and k3d
Standard K8s needs lots of configuration. K3s is a lightweight version, and k3d runs K3s inside Docker—simulate a complete K8s cluster locally.
# Create K3s cluster with one command
k3d cluster create my-cluster
# Delete cluster
k3d cluster delete my-cluster
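Instead of repeating flags on every create, k3d can also read a declarative config file. A sketch using the k3d v5 schema (verify the exact fields against your k3d version):

```yaml
# k3d-config.yaml — use with: k3d cluster create --config k3d-config.yaml
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: my-cluster
servers: 1
agents: 2
ports:
  - port: 80:80
    nodeFilters:
      - loadbalancer
```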
4. Hands-On: Build Local Dev Environment with K3d
4.1 Create Cluster
# Create a 2-node cluster (local port 80 mapped)
k3d cluster create my-dev -p "80:80@loadbalancer" --agents 2
4.2 Deploy Application
# Import local image into cluster (important! Or K8s will try to pull from public registry)
k3d image import my-app:latest -c my-dev
# Deploy K8s config
kubectl apply -f deploy/k8s-manifests/
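If deploy/k8s-manifests/ doesn't exist yet, a minimal Deployment + Service pair might look like this (names and ports are illustrative; the imagePullPolicy matters—IfNotPresent makes K8s use the image you just imported instead of pulling from a registry):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: my-app:latest
          imagePullPolicy: IfNotPresent   # use the locally imported image
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 3000
```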
4.3 Hot Reload for Local Development
In development, you can't rebuild the image for every code change, right?
Solution: Use Volume to mount source code
# Add to K8s Deployment
spec:
  containers:
    - name: web
      image: my-dev-image:latest
      volumeMounts:
        - name: source-code
          mountPath: /app
  volumes:
    - name: source-code
      hostPath:
        path: /Users/you/project/src # Local code path
        type: Directory
Now the container's /app directory maps to your local code directory—changes take effect immediately. One k3d-specific caveat: hostPath refers to the filesystem of the K8s node, which in k3d is itself a Docker container, so you also need to mount your local directory into the cluster when creating it, e.g. with k3d cluster create's -v "/Users/you/project/src:/Users/you/project/src@agent:*" flag.
5. Debugging Commands: kubectl Cheat Sheet
5.1 Check Status
# List Pods
kubectl get pods
# List all resources
kubectl get all
# Get Pod details
kubectl describe pod <pod-name>
# Get Deployment details
kubectl describe deployment <deployment-name>
5.2 View Logs
# View logs
kubectl logs <pod-name>
# Follow logs in real-time (like tail -f)
kubectl logs -f <pod-name>
# View previous instance logs (before container restarted)
kubectl logs --previous <pod-name>
# View specific container logs (when Pod has multiple containers)
kubectl logs <pod-name> -c <container-name>
5.3 Access Container
# Get shell access
kubectl exec -it <pod-name> -- /bin/sh
# Run specific command
kubectl exec <pod-name> -- ls -la /app
# Access specific container
kubectl exec -it <pod-name> -c <container-name> -- /bin/sh
5.4 Network Debugging
# Port forward to local (debug internal services)
kubectl port-forward <pod-name> 8080:80
# View Services
kubectl get svc
# View Ingresses
kubectl get ingress
5.5 Troubleshooting
# View events (sorted by time)
kubectl get events --sort-by='.lastTimestamp'
# View nodes
kubectl get nodes
# View resource usage
kubectl top pods
kubectl top nodes
5.6 Useful Aliases (Productivity)
# Add to ~/.bashrc or ~/.zshrc
alias k='kubectl'
alias kg='kubectl get'
alias kd='kubectl describe'
alias kl='kubectl logs'
6. Project Structure: Monorepo + GitOps
A mature frontend project structure looks like this:
my-project/
├── .github/workflows/ # CI/CD: auto build and deploy after code push
├── apps/ # Business code
│ ├── frontend/ # Frontend (has its own Dockerfile)
│ ├── bff/ # BFF layer (has its own Dockerfile)
│ └── render/ # Render service (has its own Dockerfile)
├── packages/ # Shared code (TS types, utility functions, etc.)
└── deploy/ # Infrastructure config
├── docker-compose.yml # For local development
└── k8s/ # K8s deployment config
├── deployment.yaml
├── service.yaml
└── ingress.yaml
Workflow becomes:
- You git push code to the repository
- CI automatically runs tests, builds the image, and pushes it to the registry
- CI triggers deployment, and the K8s cluster smoothly rolls out the new config
You don't manually log into servers and run commands—everything is automated.
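As a sketch, the CI half of that workflow in GitHub Actions might look like this (the registry, image name, and branch are placeholders you'd replace with your own):

```yaml
# .github/workflows/deploy.yml
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test                # CI runs the tests
      - run: docker build -t ghcr.io/you/my-app:${{ github.sha }} ./apps/frontend
      - run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - run: docker push ghcr.io/you/my-app:${{ github.sha }}   # push to the registry
```

The deployment half (updating the K8s manifests with the new image tag) varies by team—some apply manifests directly from CI, others use a GitOps tool that watches the repo.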
7. FAQ
Q: What's the difference between Docker and VMs?
VMs include a complete computer (including OS), consuming more resources and starting slowly. Containers share the host's OS, making them lighter and faster to start.
Analogy:
- VM = Separate house (each has its own foundation, water, and electrical systems)
- Container = Apartment building (shared building utilities, but each unit is independent)
Q: Docker Compose vs K8s—when to use which?
- Local development + single-machine deployment → Docker Compose
- Production + multi-server cluster → K8s
Q: Is the K8s learning curve steep?
Initially there are concepts to learn (Pod, Service, Deployment, Ingress, etc.), but once you understand the core concepts, it's not hard. K3d lets you practice locally without real servers.
Q: What's the point of frontend devs learning containerization?
- SSR project local debugging
- Services requiring native libraries (poster generation, etc.)
- Microservices frontend local debugging environment
- Better communication with backend/ops colleagues
- Interview advantage
Summary
Containerization fundamentally solves an old problem: local and production environments are different.
With Docker + K8s, you can:
- Reproduce production environment locally 1:1
- Start multiple services with one command
- Auto-scale and self-heal
- Stop worrying about "it worked on my machine"
Learning containerization isn't about switching careers to ops—it's about making your own work smoother. When your SSR projects, poster services, and micro-frontends run easily on your own machine, you'll thank yourself for learning these skills today.