How the system is designed, deployed, and observed in production.
Overview
Problem statement and what was built
iCampus DevOps Lab is a production-grade case study platform built to document and showcase DevOps and SRE projects the same way real engineering teams do — with architecture diagrams, CI/CD pipeline breakdowns, failure scenarios, and recovery paths.
Most engineering portfolios either show what was built (screenshots, GitHub links) without explaining production behaviour, or describe architecture in abstract terms without evidence. This platform was built to answer: "How does this system behave in production, and how does it fail?" — not "What did I build?"
It was designed and built end-to-end in a single cycle: requirements definition, data modelling, backend API design, frontend engineering, cloud infrastructure, database provisioning, image storage, authentication, CI/CD pipeline automation, custom domain configuration, and public deployment. The result is a live platform that functions simultaneously as a technical portfolio and a demonstration of the DevOps practices it documents.
Architecture
How services interact
The platform uses a serverless architecture optimised for low operational overhead, zero server management, and automatic scaling.
Layer breakdown:
- DNS: Namecheap — icampusdevopslab.xyz → A record 76.76.21.21
- CDN + Edge: Vercel Edge Network — global distribution, SSL termination
- Frontend: Next.js 15 (App Router) — server and client components
- API: Next.js API Routes — REST endpoints, no separate API server
- Database: Supabase PostgreSQL — managed, pooled via PgBouncer on port 6543
- File storage: Vercel Blob — images uploaded from admin, served via CDN
- CI/CD: GitHub Actions — lint, typecheck, build, deploy on push to main
- Admin auth: HTTP-only cookie + edge middleware — password-protected /admin route
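The admin-auth layer above can be sketched as a small check the edge middleware might run before serving /admin. This is a minimal sketch, not the platform's actual code: the function name, the cookie contents, and the hash-then-compare scheme are all assumptions.

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Hypothetical helper for an edge middleware: decides whether the
// HTTP-only session cookie matches the expected admin password.
// Both values are hashed to fixed-length buffers so the comparison
// can run in constant time via timingSafeEqual.
function isAdminSession(
  cookieValue: string | undefined,
  adminPassword: string
): boolean {
  if (!cookieValue) return false;
  const given = createHash("sha256").update(cookieValue).digest();
  const expected = createHash("sha256").update(adminPassword).digest();
  return timingSafeEqual(given, expected);
}
```

In a real middleware this check would gate the /admin route and redirect unauthenticated requests to a login page.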
Request lifecycle: User hits icampusdevopslab.xyz → DNS resolves to Vercel Edge → static pages served from ISR cache or dynamic requests routed to Next.js runtime → server components query Supabase directly via lib/queries/ (no HTTP round-trip) → pooled connection via PgBouncer (port 6543) avoids IPv6 ENETUNREACH errors and connection exhaustion on the free tier.
Database schema uses six relational tables:
- projects — UUID PK, unique slug, denormalised like_count, featured flag
- tags — normalised, with hex color
- project_tags — many-to-many join, ON DELETE CASCADE
- gallery_images — one-to-many, position-ordered
- likes — fingerprint-based dedup via SHA-256 of IP + user agent
- comments — author email stored but not displayed; approved flag for future moderation
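The likes table's fingerprint dedup can be illustrated with a short sketch. The exact concatenation order and any salting in the real code are assumptions; the idea is that a SHA-256 digest is stored instead of the raw IP and user agent.

```typescript
import { createHash } from "node:crypto";

// Hypothetical fingerprint used to dedupe likes: a SHA-256 over the
// client IP and user agent. The same visitor always produces the same
// 64-char hex digest, so a unique constraint on (project_id, fingerprint)
// prevents duplicate likes without storing personal data in the clear.
function likeFingerprint(ip: string, userAgent: string): string {
  return createHash("sha256").update(`${ip}:${userAgent}`).digest("hex");
}
```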
Key architectural decisions:
- Drizzle ORM over Prisma — lighter weight, no separate query engine
- Pooled PgBouncer connection (port 6543) for all app queries; direct port 5432 only for drizzle-kit push migrations
- Lazy Drizzle initialisation via a Proxy, preventing Invalid URL errors during module evaluation
- force-dynamic on case study pages, so like/comment counts are always fresh
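The lazy-initialisation decision can be sketched with a generic Proxy wrapper. The helper name `lazy`, the example factory, and the query stub are placeholders, not the project's actual identifiers; the point is that nothing reads DATABASE_URL until the first property access.

```typescript
// Hypothetical lazy initialiser: the real client is constructed on first
// property access, so merely importing the module never touches
// DATABASE_URL (avoiding "Invalid URL" at module-evaluation time).
function lazy<T extends object>(init: () => T): T {
  let instance: T | undefined;
  return new Proxy({} as T, {
    get(_target, prop) {
      instance ??= init();
      return Reflect.get(instance, prop);
    },
  });
}

// Example: the factory throws if DATABASE_URL is missing, but only when
// a query actually runs, never at import time.
const db = lazy(() => {
  const url = process.env.DATABASE_URL;
  if (!url) throw new Error("DATABASE_URL is not set");
  return { query: (sql: string) => `querying ${url}: ${sql}` };
});
```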
CI / CD pipeline
Build, test, and deploy automation
Two GitHub Actions workflows handle all automation:
CI Workflow (ci.yml) — triggered on every push and pull request to main:
1. Checkout (actions/checkout@v4)
2. Node.js 20 setup with npm cache (actions/setup-node@v4)
3. Clean install: npm ci
4. Lint: npm run lint (ESLint with Next.js config)
5. Typecheck: npx tsc --noEmit
6. Build: npm run build with all environment secrets injected
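The six CI steps above correspond to a workflow roughly like the following. This is a sketch, not the repository's actual ci.yml: step ordering matches the list, but job names and the exact set of secrets injected at build time are assumptions.

```yaml
name: CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run lint
      - run: npx tsc --noEmit
      - run: npm run build
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
          BLOB_READ_WRITE_TOKEN: ${{ secrets.BLOB_READ_WRITE_TOKEN }}
```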
Deploy Workflow (deploy.yml) — triggered on merge to main and manual dispatch:
1. Checkout + Node.js setup
2. Install Vercel CLI globally
3. vercel pull — fetches project settings and environment configuration into .vercel/ (including project.json, which links the repo to the Vercel project)
4. vercel build --prod — builds using Vercel's build system
5. vercel deploy --prebuilt --prod — uploads pre-built output
6. Post summary — writes deployment URL, commit SHA, and triggering actor to GitHub Actions job summary
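The deploy steps map to a workflow in roughly this shape. Again a sketch rather than the actual deploy.yml: the summary line's wording and the token wiring are assumptions, though the pull → build → deploy --prebuilt sequence follows the list above.

```yaml
name: Deploy
on:
  push:
    branches: [main]
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install --global vercel@latest
      - run: vercel pull --yes --environment=production --token=${{ secrets.VERCEL_TOKEN }}
        env:
          VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
          VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
      - run: vercel build --prod --token=${{ secrets.VERCEL_TOKEN }}
      - run: vercel deploy --prebuilt --prod --token=${{ secrets.VERCEL_TOKEN }}
      - run: echo "Deployed ${{ github.sha }} by ${{ github.actor }}" >> "$GITHUB_STEP_SUMMARY"
```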
Required secrets: VERCEL_TOKEN, VERCEL_ORG_ID, VERCEL_PROJECT_ID, DATABASE_URL (pooled, port 6543), BLOB_READ_WRITE_TOKEN, ADMIN_PASSWORD, NEXT_PUBLIC_APP_URL.
Observability
Logs, metrics, and alerting
Current observability is at the infrastructure level via managed services:
- Vercel dashboard: deployment logs, serverless function invocation logs, edge middleware logs, and build output per deployment
- Supabase dashboard: database query logs, connection pool utilisation, and storage metrics
- GitHub Actions: per-step CI/CD logs with deployment URL and commit SHA written to job summary on every deploy
Planned enhancements (Phase 4):
- Vercel Analytics for real user metrics (Core Web Vitals, page views)
- Sentry for error tracking and exception reporting
- Structured logging with request IDs for cross-service debugging
- Uptime monitoring via Better Uptime or UptimeRobot
- Query-performance tracking for PostgreSQL tsvector full-text search
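The structured-logging item above could take a shape like this minimal sketch. Field names, the logger API, and the use of randomUUID are assumptions about a future implementation, not existing code; the essential idea is one JSON object per line carrying a request ID that survives across services.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical structured logger: each entry is a single JSON line
// tagged with a request ID, so one request can be traced end to end
// across serverless function, database, and CI logs.
function makeLogger(requestId: string = randomUUID()) {
  return {
    requestId,
    log(
      level: "info" | "error",
      message: string,
      extra: Record<string, unknown> = {}
    ): string {
      const line = JSON.stringify({
        ts: new Date().toISOString(),
        level,
        requestId,
        message,
        ...extra,
      });
      console.log(line);
      return line;
    },
  };
}
```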