DMI Internship

Production-Ready Dockerized Deployment — The EpicBook

April 2026 · Repository
Docker · Docker Compose · Nginx · MySQL · Node.js · AWS · Multi-stage Build · DevOps

Containerized The EpicBook full-stack application using Docker Compose on AWS EC2 — multi-stage builds, split networks, MySQL health checks, NGINX reverse proxy, named volume persistence, structured logging, and environment-based secrets management. Deployed publicly on a t3.micro with minimal exposed ports.


Overview

This capstone project takes The EpicBook — a Node.js / Express / MySQL bookstore application — from source code to a production-style deployment on AWS EC2 using modern containerization practices.

The objective was to integrate a full Docker module into a single deployable solution: multi-stage builds, Compose orchestration, service networking, persistent storage, health-checked startup sequencing, reverse proxy routing, environment management, and structured logging — then validate the deployment publicly on a cloud VM.

Architecture

User Browser
    ↓
NGINX Reverse Proxy  (port 80 — public)
    ↓
EpicBook App         (port 8080 — internal, public_net)
    ↓
MySQL 8.0            (port 3306 — internal, private_net only)

Two Docker networks enforce service isolation:

| Network | Services | Purpose |
|---|---|---|
| public_net | reverse-proxy, epicbook-app | Proxy-to-app traffic |
| private_net | epicbook-app, mysql | App-to-database — DB not publicly reachable |

The database port is never published to the host or the internet. Only port 80 is publicly exposed.
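A minimal Compose fragment illustrating this topology might look like the following sketch (service and network names are taken from the stack above; everything else is illustrative):

```yaml
networks:
  public_net:
  private_net:

services:
  reverse-proxy:
    networks: [public_net]
    ports:
      - "80:80"                           # the only published port
  epicbook-app:
    networks: [public_net, private_net]   # bridges proxy and DB
  mysql:
    networks: [private_net]               # no ports: mapping — unreachable from outside
```

Because the mysql service has no `ports:` mapping and sits only on private_net, it is reachable solely by containers attached to that network.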

Multi-Stage Dockerfile

The application image uses a two-stage build:

  • Builder stage (node:18) — full dependency install including devDependencies
  • Runtime stage (node:18-alpine) — production dependencies only, built artifacts copied from builder

Result: a lean Alpine-based runtime image with no build tooling, reduced attack surface, and faster pull times on deployment.
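A sketch of that two-stage Dockerfile, assuming a standard npm workflow — the project's actual build script, artifact paths, and entry point may differ:

```dockerfile
# Stage 1: builder — full install, including devDependencies
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build            # assumed build step; adjust to the project's scripts

# Stage 2: runtime — production dependencies only, no build tooling
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist   # assumed artifact path
EXPOSE 8080
CMD ["node", "dist/server.js"]         # entry point is an assumption
```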

Compose Stack

Docker Compose orchestrates four concerns simultaneously:

  • Service definitions — reverse-proxy, epicbook-app, mysql
  • Split networks — public_net and private_net
  • Named volumes — db_data for MySQL persistence
  • Health checks + startup order — depends_on: condition: service_healthy prevents the app starting before MySQL is ready

Health Checks and Startup Order

MySQL health check:

healthcheck:
  test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
  interval: 10s
  timeout: 5s
  retries: 10

The application service uses depends_on: condition: service_healthy, so it only starts once the MySQL container reports healthy — a single successful ping flips it to healthy, with up to 10 retries before it is declared unhealthy. This eliminates the startup race condition that causes connection errors when MySQL is still initialising.
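On the application side, the corresponding Compose fragment (service names from the stack above) is short:

```yaml
services:
  epicbook-app:
    depends_on:
      mysql:
        condition: service_healthy   # block startup until the ping check passes
```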

Reverse Proxy and Routing

NGINX routes all traffic to the application container using its Compose service name — no hardcoded IPs:

location / {
    proxy_pass http://epicbook-app:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

Logging

| Service | Destination | Method |
|---|---|---|
| NGINX access logs | ./logs/nginx/access.log | Bind mount to host |
| NGINX error logs | ./logs/nginx/error.log | Bind mount to host |
| Application logs | Container stdout | docker logs epicbook-app |

Bind-mounting NGINX logs to the host filesystem was the correct call for this deployment — it made diagnosing the 502 routing error significantly faster than inspecting inside the container.
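The bind mount behind this is a single Compose volume entry on the proxy service — host paths as listed in the table, with the container path assuming NGINX's default log directory:

```yaml
services:
  reverse-proxy:
    volumes:
      - ./logs/nginx:/var/log/nginx   # access.log and error.log land on the host
```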

Environment and Secrets Management

All credentials are externalised to .env — no secrets in source files or Compose YAML. The same Compose file runs in development and production; only the .env changes.

Variables managed: MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, JAWSDB_URL, NODE_ENV, APP_PORT.
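A sketch of that .env file using the variables listed above — every value here is a placeholder, not the real credential, and the JAWSDB_URL format is an assumed standard mysql:// connection string:

```env
# .env — excluded from source control (placeholder values only)
MYSQL_ROOT_PASSWORD=change-me
MYSQL_DATABASE=epicbook
JAWSDB_URL=mysql://app_user:change-me@mysql:3306/epicbook
NODE_ENV=production
APP_PORT=8080
```

Docker Compose reads .env from the project directory automatically, so the Compose YAML can reference these as ${MYSQL_ROOT_PASSWORD}, ${APP_PORT}, and so on without ever containing a secret itself.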

Cloud Deployment (AWS EC2)

  • Instance: Ubuntu 24.04, t3.micro
  • Security Group:
    • SSH (22) — restricted to admin IP only
    • HTTP (80) — public
    • HTTPS (443) — open (future TLS)
    • Port 3306 — not in Security Group (internal only)

Application validated publicly: UI served, product pages loading, cart and checkout functional, data persisting across docker compose down / up cycles.

Challenges and Resolutions

Docker Compose plugin not found. The docker-compose-plugin package was unavailable via apt on Ubuntu 24.04. Resolution: used Docker Compose v2, invoked as docker compose through the Docker CLI — no separate docker-compose binary required.

502 Bad Gateway on first deployment. NGINX proxy_pass was configured for port 3000, but the application listens on port 8080. The NGINX error log (via the bind mount) surfaced the upstream connection failure immediately. Resolution: updated proxy_pass to http://epicbook-app:8080 and restarted the stack — resolved.

Key Engineering Decisions

  • Multi-stage build — devDependencies never enter the production image; Alpine runtime keeps the image minimal
  • Split networks — MySQL is unreachable from the proxy layer by design; only the app container sits on both networks
  • Named volumes over bind mounts for the DB — db_data persists across full stack teardown; the database survives container recreation
  • Service-name DNS — http://epicbook-app:8080 resolves regardless of IP assignment; no hardcoding
  • Bind-mounted proxy logs — faster troubleshooting access without docker exec; logs survive container restarts

Key Learnings

Named volumes are the correct persistence mechanism for database data — not bind mounts and not ephemeral container storage. docker compose down destroys containers; named volumes survive until explicitly removed with docker compose down -v.

Health checks with service_healthy conditions are not optional for databases. Without them, the application starts before MySQL is ready and fails with connection errors on every cold start.

Externalising all secrets to .env from the start — before the first docker compose up — is far easier than retrofitting it. The Compose file and Dockerfile stay clean; only the .env varies per environment.