DMI Internship
Infrastructure as Code — Terraform + Ansible on Azure
Provisioned a secure Azure VM with Terraform and automated the full install, deploy, and verify cycle with an Ansible multi-play playbook — deploying a static web application with HTTP verification as the final gate.

Overview
This project combined two industry-standard tools — Terraform for infrastructure provisioning and Ansible for configuration management — to deploy a static web application on Azure with clean separation of concerns between the two. The goal was not just a working deployment, but a repeatable, verifiable pipeline: one that ends with an automated assertion of correctness rather than a manual check.
Problem
Provisioning infrastructure and configuring what runs on it are distinct concerns. Mixing them — using a single script or doing either step manually — creates fragility and makes debugging harder when something goes wrong. The challenge was to demonstrate proper separation: Terraform owns the infrastructure layer, Ansible owns everything that happens on it, and neither crosses into the other's domain.
Architecture
Terraform provisions the full Azure environment from code:
- Resource Group — `rg-mini-finance`
- Virtual Network — `10.0.0.0/16` with subnet `10.0.1.0/24`
- Network Security Group — inbound rules for TCP 22 (SSH) and TCP 80 (HTTP) only
- Static Public IP + NIC associated to the VM
- Ubuntu 22.04 VM (`Standard_B2ats_v2`) — key-based SSH, password authentication disabled
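A minimal Terraform sketch of the locked-down NSG and the public IP output described above. Resource names, references, and the output name are illustrative assumptions, not the project's actual code:

```hcl
# Illustrative sketch: only TCP 22 and TCP 80 are allowed inbound.
# Resource names and references are assumptions.
resource "azurerm_network_security_group" "web" {
  name                = "nsg-mini-finance"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  security_rule {
    name                       = "allow-ssh"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "allow-http"
    priority                   = 110
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "80"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

# Public IP surfaced as a named output so Ansible knows exactly
# which host to target after `terraform apply`.
output "public_ip" {
  value = azurerm_public_ip.web.ip_address
}
```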
Ansible takes over once the VM is reachable, running three plays in sequence:
- Play 1 — Install: apt update → install nginx + git → start and enable nginx
- Play 2 — Deploy: clone repo → sync to `/var/www/html/` → set `www-data` ownership → reload nginx
- Play 3 — Verify: `uri` module → GET `http://<public_ip>` → assert `status_code == 200`
The verify play runs on localhost using the `uri` module. The playbook does not complete successfully unless the assertion passes.
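The verify play can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the variable `vm_public_ip` and the exact task names are hypothetical, not the project's actual playbook:

```yaml
# Illustrative sketch of Play 3: fail the run unless the site returns HTTP 200.
# The vm_public_ip variable is an assumption.
- name: Verify site responds with HTTP 200
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Request the site over HTTP
      ansible.builtin.uri:
        url: "http://{{ vm_public_ip }}"
      register: http_result

    - name: Assert HTTP 200 was returned
      ansible.builtin.assert:
        that:
          - http_result.status == 200
        fail_msg: "Site did not return HTTP 200 (got {{ http_result.status }})"
        success_msg: "Mini Finance site is reachable and returned HTTP 200"
```

Because `assert` fails the play with `fail_msg` when the condition is false, silent success is impossible: the playbook either proves the site is live or stops loudly.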
Technologies Used
- Terraform — Infrastructure as Code, Azure provider `~> 3.100`
- Ansible — Configuration management and deployment automation
- Azure — Resource Group, VNet, NSG, Public IP, NIC, Ubuntu 22.04 VM
- Nginx — Web server, managed by Ansible throughout
- GitHub Actions — CI pipeline: Terraform format check and Ansible syntax check on every push
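The CI pipeline described above could look roughly like this. A hedged sketch only: the workflow name, action versions, playbook filename (`site.yml`), and install method are assumptions, not the repository's actual workflow:

```yaml
# Illustrative CI sketch: format check for Terraform, syntax check for Ansible,
# on every push. File names and versions are assumptions.
name: ci
on: [push]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: hashicorp/setup-terraform@v3

      - name: Terraform format check
        run: terraform fmt -check -recursive

      - name: Install Ansible
        run: pipx install ansible-core

      - name: Ansible syntax check
        run: ansible-playbook --syntax-check site.yml
```

Neither check touches Azure; both catch malformed code before it reaches a real `terraform apply` or playbook run.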
Key Engineering Decisions
- Terraform outputs the public IP — surfaced as a named output after `terraform apply`, so there is no ambiguity about which host to target in Ansible. No manual note-taking required.
- NSG rules are explicit and minimal — only ports 22 and 80 are open. No broad CIDR rules, no default-permissive configurations left in place.
- Ansible handler for Nginx reload — the reload is triggered only when content actually changes, not on every run. Idempotent by design.
- Ownership set explicitly and recursively — `www-data:www-data` with `recurse: true` applied to `/var/www/html/` to ensure Nginx can serve every file in the tree.
- Verify play uses `assert` — the playbook fails loudly with a clear message if the site is not reachable. Silent success is not accepted.
- `terraform.tfvars` gitignored — only `terraform.tfvars.example` is committed. State files and provider binaries are excluded. The repository contains no environment-specific values or secrets.
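The handler and ownership decisions above can be sketched together. This is an illustrative reconstruction: the host group, source path, and task names are assumptions, while the web root, ownership, and `recurse: true` come from the write-up:

```yaml
# Illustrative sketch of Play 2: copy triggers the reload handler only when
# content changes; the file task sets ownership recursively.
- name: Deploy static site
  hosts: web
  become: true
  tasks:
    - name: Sync site content into the web root
      ansible.builtin.copy:
        src: site/
        dest: /var/www/html/
      notify: Reload nginx   # fires only if the copy reports "changed"

    - name: Ensure Nginx can serve every file in the tree
      ansible.builtin.file:
        path: /var/www/html/
        owner: www-data
        group: www-data
        recurse: true

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

On an unchanged re-run the copy task reports "ok", the handler never fires, and Nginx is left untouched, which is what makes the play idempotent.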
Results
Terraform provisioned the full Azure environment in a single `terraform apply`. Ansible completed all three plays without failures. The verify play returned:
```
TASK [Assert HTTP 200 was returned]
ok: [localhost] => {
    "msg": "Mini Finance site is reachable and returned HTTP 200"
}
```
Zero manual steps between `terraform apply` and a confirmed live site.
Debugging: The 403 Incident
The initial deployment completed without errors but returned a 403 Forbidden from the verify play.
Root cause: file ownership. Content had been synced to `/var/www/html/` but Nginx could not serve it because the files were not owned by `www-data`. The `file` task in Play 2 sets `owner: www-data`, `group: www-data`, and `recurse: true` — but this was not applied correctly on the first run.
Fix: corrected the ownership task, redeployed, revalidated. The verify play returned HTTP 200.
This is the value of the verify play. The deployment "worked" by every other measure — no Ansible errors, no Terraform errors, VM reachable over SSH. Without the HTTP assertion, finding the failure would have required a manual browser check. With it, the fault was caught immediately, the fault domain was clear (configuration, not infrastructure), and the fix was targeted.
Key Learnings
Separation of concerns between Terraform and Ansible is not just organisational preference — it is what makes debugging tractable. When the 403 appeared, it was immediately clear the problem was in the configuration layer, not the infrastructure layer, because each tool had a distinct and non-overlapping responsibility. Without that separation, the debugging surface would have been the entire stack.
The verify play is the discipline that closes the loop. Automated infrastructure without automated verification is incomplete — you have defined the desired state but not confirmed it was reached.