DMI Internship
Capstone — EpicBook Dual-Pipeline DevOps Automation (Azure DevOps + Terraform + Ansible)
Designed and implemented a complete enterprise DevOps automation workflow for the EpicBook full-stack application using two separate repositories and two Azure DevOps pipelines — one for infrastructure provisioning with Terraform, one for configuration and deployment with Ansible.

Overview
This capstone project brings together every DevOps concept from the internship series into a single end-to-end workflow. The challenge: automate the full lifecycle of the EpicBook full-stack web application across two separate repositories and two separate Azure DevOps pipelines — one responsible for infrastructure, one responsible for configuration and deployment.
The two-repository, two-pipeline model is the enterprise standard. Infrastructure teams own the infra repo; application teams own the app repo. Each runs independently, with a defined handoff — Terraform outputs consumed by Ansible — that mirrors how real organisations separate infrastructure ownership from application ownership.
Problem
Deploying a multi-tier application manually — provisioning VMs, configuring databases, installing runtimes, deploying code, configuring reverse proxies — is error-prone and unrepeatable. The goal was to automate the entire workflow so that a single pipeline run provisions the infrastructure and a second pipeline run configures and deploys the application, with no manual steps between them.
Architecture
Repository 1: `sqenchill/infra-epicbook` (Terraform)

```
└── Infra Pipeline (Azure DevOps)
    ├── Ensure remote state backend (Azure Blob Storage)
    ├── terraform init (azurerm backend, SPN auth)
    ├── terraform validate
    ├── terraform plan → tfplan artifact
    └── terraform apply → outputs:
        ├── app_public_ip
        ├── backend_private_ip
        ├── backend_public_ip
        └── mysql_fqdn
```

Repository 2: `sqenchill/theepicbook` (Ansible + App Code)

```
└── App Pipeline (Azure DevOps)
    ├── Install Ansible
    ├── Download SSH key (Azure DevOps Secure Files)
    ├── Build Ansible inventory from Terraform outputs
    ├── ansible-playbook site.yml
    │   ├── hosts: backend
    │   │   └── roles: common → nodejs → epicbook
    │   └── hosts: frontend
    │       └── roles: common → nginx
    └── HTTP health check → verify EpicBook is live
```
Infrastructure (Terraform — Azure):
- Resource Group: `rg-epicbook-capstone` (West US 3)
- Virtual Network: `vnet-epicbook` (10.0.0.0/16)
- Frontend Subnet: `10.0.1.0/24` with public IP and NSG (TCP 22, TCP 80)
- Backend Subnet: `10.0.2.0/24` with NSG (TCP 22, TCP 8080 from VNet)
- Frontend VM: `epicbook-frontend-vm` (Ubuntu 24.04, `Standard_B2als_v2`)
- Backend VM: `epicbook-backend-vm` (Ubuntu 24.04, `Standard_B2als_v2`)
- MySQL Flexible Server: `epicbookmysql123456` (v8.0, `B_Standard_B1ms`, public firewall rule for Azure services)
- Remote state: Azure Blob Storage (`tfstateepic27c8/tfstate`)
Ansible Roles:
| Role | Host | Responsibilities |
|---|---|---|
| common | both | apt update, baseline packages, disable root SSH, disable password auth |
| nodejs | backend | Install Node.js 20, PM2 globally, clone EpicBook repo, npm install, deploy config.json from template |
| epicbook | backend | MySQL connection configuration, database and schema setup, PM2 process start |
| nginx | frontend | Install Nginx, deploy reverse proxy config, proxy port 80 → backend:8080 |
Traffic flow:
Browser → Frontend VM (Nginx :80) → proxy_pass → Backend VM (Node.js/PM2 :8080) → MySQL Flexible Server
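The proxy hop in the middle can be sketched as an Nginx server block like the one below. This is illustrative only: the backend address `10.0.2.4` is an assumed private IP inside the `10.0.2.0/24` subnet, and the actual nginx role would template this value from the inventory rather than hard-coding it.

```nginx
server {
    listen 80;

    location / {
        # Forward all frontend traffic to the Node.js/PM2 app on the
        # backend VM's private IP (assumed 10.0.2.4) inside the VNet.
        proxy_pass http://10.0.2.4:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```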
Pipelines
Infra Pipeline
The infrastructure pipeline uses `trigger: none` — infrastructure changes are intentional, not triggered by every code push. The `runApply` parameter allows plan-only runs for review before committing to apply.
Authentication uses an Azure Resource Manager SPN Service Connection (`epicbook-azure-rm`). The `addSpnToEnvironment: true` flag exposes `$servicePrincipalId`, `$servicePrincipalKey`, and `$tenantId` inside the pipeline script, which are mapped to `ARM_*` environment variables — the pattern Terraform's `azurerm` provider and backend both require.
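That mapping can be sketched in a few lines of shell inside the pipeline script step. The default values below are placeholder stubs so the sketch runs standalone; in the real pipeline, `addSpnToEnvironment: true` injects the actual SPN values.

```shell
# Map the SPN variables exposed by addSpnToEnvironment: true onto the ARM_*
# environment variables that Terraform's azurerm provider and backend read.
# The ':-' defaults are stand-ins so this sketch is self-contained.
servicePrincipalId="${servicePrincipalId:-00000000-0000-0000-0000-000000000000}"
servicePrincipalKey="${servicePrincipalKey:-stub-secret}"
tenantId="${tenantId:-stub-tenant}"

export ARM_CLIENT_ID="$servicePrincipalId"
export ARM_CLIENT_SECRET="$servicePrincipalKey"
export ARM_TENANT_ID="$tenantId"

echo "ARM_CLIENT_ID=$ARM_CLIENT_ID"
```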
Remote state is managed via Azure Blob Storage. The pipeline creates the storage account and container idempotently before `terraform init` runs, so there is no external prerequisite.
App Pipeline
The app pipeline triggers on commits to `main` and accepts Terraform outputs as runtime parameters (`APP_PUBLIC_IP`, `BACKEND_PUBLIC_IP`, `BACKEND_PRIVATE_IP`, `MYSQL_FQDN`). This is the manual handoff point — in a mature pipeline, these would be passed automatically via pipeline variables or a shared artifact.
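One minimal way to turn those parameters into an Ansible inventory at runtime is sketched below. The IP literals and the `azureuser`/key-path values are illustrative placeholders; a real run substitutes the pipeline parameters and the downloaded Secure File path.

```shell
# Generate an Ansible INI inventory from the Terraform outputs handed to the
# app pipeline. All values here are placeholders for the real parameters.
APP_PUBLIC_IP="203.0.113.10"
BACKEND_PUBLIC_IP="203.0.113.20"

cat > inventory.ini <<EOF
[frontend]
${APP_PUBLIC_IP} ansible_user=azureuser ansible_ssh_private_key_file=./epicbook-azure-key

[backend]
${BACKEND_PUBLIC_IP} ansible_user=azureuser ansible_ssh_private_key_file=./epicbook-azure-key
EOF

cat inventory.ini
```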
The SSH private key is stored as an Azure DevOps Secure File (`epicbook-azure-key`) and downloaded at runtime using the `DownloadSecureFile@1` task — keeping credentials entirely out of the repository and pipeline YAML.
The final step executes a `curl` health check against `http://<APP_PUBLIC_IP>` and fails the pipeline if the HTTP response is not `200`, providing a deployment gate.
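The gate can be sketched as a small shell function. The function form is for illustration; the pipeline would inline the same logic with the real frontend IP.

```shell
# Deployment gate: request the URL and fail unless the response is HTTP 200.
# curl's -w '%{http_code}' prints the status code (000 if no HTTP response).
health_check() {
  url="$1"
  code=$(curl -s -o /dev/null --max-time 5 -w '%{http_code}' "$url" || true)
  if [ "$code" = "200" ]; then
    echo "healthy"
  else
    echo "unhealthy ($code)"
    return 1
  fi
}
```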
Key Engineering Decisions
- Two-repository model — Infrastructure and application code are intentionally separated. The infra pipeline can run and succeed before anyone touches the app repo. This separation reflects how real teams operate and makes ownership boundaries explicit.
- Remote Terraform state — State is stored in Azure Blob Storage, not the local agent workspace. This is non-negotiable for any team or pipeline where multiple agents or operators might interact with the same infrastructure.
- SPN via Service Connection — Azure credentials are never written into pipeline YAML or stored as plain variables. The `epicbook-azure-rm` Service Connection manages the SPN lifecycle; the pipeline references it by name.
- Secure Files for SSH keys — SSH private keys are uploaded to Azure DevOps Secure Files and downloaded at pipeline runtime. They exist on disk only during the pipeline run and are never committed to a repository.
- NSG scoping — Backend port 8080 is restricted to the `VirtualNetwork` source — not exposed publicly. Only the frontend Nginx proxy can reach the application server. The MySQL firewall rule allows Azure service access without exposing the database to the open internet.
- Ansible inventory built from Terraform outputs — The inventory file is generated at pipeline runtime using the IPs produced by `terraform output`. No static IP addresses exist in the repository.
- HTTP health check as deployment gate — The pipeline does not assume success after `ansible-playbook` completes. A `curl` HTTP check confirms the application is actually serving traffic before the pipeline reports success.
Challenges and Resolutions
MySQL Flexible Server availability zone constraint
The initial Terraform configuration specified `zone = "1"` for the MySQL Flexible Server. The Azure subscription in `westus3` did not have capacity in zone 1, causing `terraform apply` to fail. Resolution: removed the fixed zone constraint to allow Azure to select an available zone automatically. This was an infrastructure constraint specific to the subscription and region — not a code error.
SSH key format mismatch
The infra pipeline at one point used an RSA key (`ssh-rsa`), while the app pipeline expected the ed25519 key stored as a Secure File. Resolution: aligned both pipelines to use a single ed25519 key pair (`theepicbook-app-deploy`), stored in Secure Files and referenced consistently.
Terraform state backend not pre-existing
Running `terraform init` against an `azurerm` backend requires the storage account and container to already exist. Resolution: added an `AzureCLI@2` step before `terraform init` that creates the resource group, storage account, and blob container idempotently using `az storage account create ... || true`. This makes the pipeline self-bootstrapping — it works on first run against a fresh subscription.
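A sketch of that bootstrap step is below, using the resource names from this writeup. The `az` stub on the first line exists only so the sketch runs outside an Azure login; in the real `AzureCLI@2` step the genuine CLI (already authenticated via the Service Connection) is used.

```shell
# Idempotently ensure the Terraform state backend exists before terraform init.
# Stub `az` when the CLI is unavailable so this sketch is runnable standalone.
command -v az >/dev/null 2>&1 || az() { echo "[stub] az $*"; }

RG="rg-epicbook-capstone"
SA="tfstateepic27c8"
LOCATION="westus3"

# Each command tolerates "already exists" (and, here, "not logged in") errors,
# which is what makes the step safe to re-run on every pipeline execution.
az group create --name "$RG" --location "$LOCATION" --output none 2>/dev/null || true
az storage account create --name "$SA" --resource-group "$RG" \
  --location "$LOCATION" --sku Standard_LRS --output none 2>/dev/null || true
az storage container create --name tfstate --account-name "$SA" \
  --auth-mode login --output none 2>/dev/null || true

echo "state backend ready: ${SA}/tfstate"
```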
Results
- Infra pipeline provisioned all Azure resources in a single run: Resource Group, VNet, frontend and backend subnets, NSGs, VMs, MySQL Flexible Server
- Terraform outputs published to the pipeline log and mapped to `##vso` variables for downstream consumption
- App pipeline configured both VMs via Ansible: common baseline, Node.js runtime, EpicBook application, Nginx reverse proxy
- EpicBook application accessed via browser at `http://<frontend-public-ip>` — books displayed, backend database connectivity confirmed
- HTTP health check step returned `200` — pipeline reported success
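The output-to-variable mapping mentioned above can be sketched with an Azure DevOps `##vso` logging command. The IP literal is a placeholder for what `terraform output -raw app_public_ip` would print in the real pipeline.

```shell
# Publish a Terraform output as an Azure DevOps pipeline variable by echoing
# a ##vso[task.setvariable] logging command. APP_IP stands in for the real
# value from `terraform output -raw app_public_ip`.
APP_IP="203.0.113.10"
vso_line="##vso[task.setvariable variable=APP_PUBLIC_IP]${APP_IP}"
echo "$vso_line"
```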
Key Learnings
The two-pipeline model makes the handoff between infrastructure and application explicit rather than implicit. Terraform outputs become the contract between teams — the infra pipeline publishes IPs and FQDNs; the app pipeline consumes them. Understanding that boundary is what separates scripted deployment from disciplined DevOps practice.
Secure credential management is not optional at this level. SSH keys in Secure Files and SPNs in Service Connections are the correct patterns — not environment variables containing raw secrets or keys checked into repositories.
Remote Terraform state is required as soon as more than one agent or operator might touch the same infrastructure. Local state is a liability in any pipeline context.
Infrastructure constraints (availability zones, quota limits, regional capacity) are real and must be handled defensively in Terraform. Removing fixed zone constraints in favour of Azure-managed placement is often the correct production decision.