Auto-Provisioning a Full Stack from a Single Deploy Command
What if spinning up a complete production environment — database, auth, storage, SSL, the works — took 90 seconds instead of an afternoon? That's what I built, and here's why it changed how I run my agency.
Standing up a new client project used to take me most of an afternoon. Create a Linux user. Set up directory structure. Provision a database. Configure Nginx. Generate SSL certs. Write a systemd unit. Set environment variables. Test the health endpoint. Hope I didn't forget anything.
I did this enough times that I started keeping a checklist. Then I automated the checklist. Then I realized the checklist was the product.
What "Provision" Actually Means
When I deploy a new project for the first time, here's what happens automatically:
Deploy "my-new-app"
│
├─ Create Linux user: my-new-app
├─ Create home directory: /home/my-new-app/
├─ Set up release rotation: /home/my-new-app/releases/
├─ Create log directory: /var/log/projects/my-new-app/
│
├─ Provision PostgreSQL database
│ └─ Set DATABASE_URL in project env
│
├─ Provision auth service (port + 1000)
│ └─ Set AUTH_URL, AUTH_JWT_PUBLIC_KEY in project env
│
├─ Provision MinIO storage bucket
│ └─ Set S3_ENDPOINT, S3_BUCKET, S3_ACCESS_KEY, S3_SECRET_KEY
│
├─ Provision Redis database (isolated DB number)
│ └─ Set REDIS_URL in project env
│
├─ Write systemd unit: hostkit-my-new-app.service
│ └─ Restart policy: always, 5-second delay
│
├─ Write Nginx config with reverse proxy
│ ├─ Route /auth/* → auth service
│ ├─ Route /* → app (assigned port)
│ └─ WebSocket upgrade support
│
├─ Generate SSL cert for my-new-app.hostkit.dev
│
└─ Rsync app files → first release directory
└─ Activate via atomic symlink
All of this from a single command. The app is live at https://my-new-app.hostkit.dev within about 90 seconds.
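The Nginx step in the tree above can be pictured as a config roughly like this. This is an illustrative sketch, not the actual generated file: the certificate paths and port numbers are assumptions, and only the routing rules (/auth/* to the auth service, everything else to the app, with WebSocket upgrade) come from the post.

```nginx
# Sketch of the generated reverse-proxy config for my-new-app.
# Cert paths and ports are illustrative assumptions.
server {
    listen 443 ssl;
    server_name my-new-app.hostkit.dev;

    ssl_certificate     /etc/letsencrypt/live/my-new-app.hostkit.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my-new-app.hostkit.dev/privkey.pem;

    # /auth/* → auth service (app port + 1000)
    location /auth/ {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
    }

    # everything else → the app on its assigned port
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;

        # WebSocket upgrade support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```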
The Design Decisions
Per-project Linux users. Each project runs as its own user with its own home directory. This gives me process isolation without containers. If one project's Node process goes haywire, it can't touch another project's files.
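The per-user isolation and the restart policy from the provisioning tree come together in the generated systemd unit. A rough sketch, assuming a Node app and hypothetical paths (the exact ExecStart, env file location, and symlink name aren't shown in the post):

```ini
# Sketch of hostkit-my-new-app.service. User/Restart settings match
# the post; paths and the start command are assumptions.
[Unit]
Description=my-new-app (managed by hostkit)
After=network.target

[Service]
User=my-new-app
Group=my-new-app
WorkingDirectory=/home/my-new-app/current
EnvironmentFile=/home/my-new-app/.env
ExecStart=/usr/bin/node server.js
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Running as `User=my-new-app` is what makes the isolation hold: a runaway process has the permissions of that one user, nothing more.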
Release directories, not in-place deploys. Each deploy creates a timestamped release directory. The app symlink points to the current release. Rollback is an atomic symlink swap — sub-second, no downtime. I keep the last 5 releases and garbage-collect the rest.
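The rollback mechanics are simple enough to sketch in a few lines of Python. The function names, the `current` symlink name, and the dry-run pruning are hypothetical; the atomic-rename trick and the keep-5 policy are from the text:

```python
import os
from pathlib import Path

RELEASES_TO_KEEP = 5  # matches the "last 5 releases" policy

def activate_release(home: Path, release: str) -> None:
    """Point the `current` symlink at a release directory atomically.

    A new symlink is created under a temporary name, then os.replace()
    swaps it over the old one in a single atomic rename, so there is
    no moment where `current` is missing or half-written.
    """
    target = home / "releases" / release
    tmp = home / "current.tmp"
    if tmp.is_symlink() or tmp.exists():
        tmp.unlink()
    tmp.symlink_to(target)
    os.replace(tmp, home / "current")  # atomic swap: sub-second, no downtime

def prune_releases(home: Path) -> None:
    """Garbage-collect all but the newest RELEASES_TO_KEEP releases."""
    # Timestamped directory names sort chronologically as strings.
    releases = sorted((home / "releases").iterdir())
    for old in releases[:-RELEASES_TO_KEEP]:
        print(f"would remove {old}")  # the real tool would rmtree here
```

Rollback is just `activate_release(home, previous_release)` — the same swap, pointed at an older directory.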
Port assignment from a pool. Each project gets an assigned port. Nginx handles the routing. No port conflicts, no manual tracking. The auth service gets port + 1000; payments gets port + 2000. Predictable and debuggable.
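A deterministic scheme like that can be sketched in a few lines. The base port and pool spacing here are illustrative assumptions; the +1000/+2000 offsets mirror the convention described above:

```python
# Sketch of predictable port assignment. Base port is an assumption.
APP_PORT_BASE = 3000  # first project gets 3000, second 3001, ...

SERVICE_OFFSETS = {
    "app": 0,
    "auth": 1000,      # auth service  = app port + 1000
    "payments": 2000,  # payments      = app port + 2000
}

def assign_ports(project_index: int) -> dict[str, int]:
    """Derive every service port for a project from one assigned app port."""
    app_port = APP_PORT_BASE + project_index
    return {name: app_port + offset for name, offset in SERVICE_OFFSETS.items()}
```

Given a project's app port, every related service port is computable in your head — that's the "debuggable" part.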
Environment variables are auto-set. The biggest source of "why isn't this working" bugs was missing env vars. Now, DATABASE_URL, AUTH_URL, S3_*, REDIS_URL, and PORT are all injected automatically. Developers only set app-specific variables like NEXT_PUBLIC_BASE_URL or STRIPE_API_KEY.
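The injection step amounts to merging platform-provided variables with the developer's app-specific ones. The variable names are from the post; the merge function and its override-wins behavior are a hypothetical sketch:

```python
# Sketch of the env merge; the conflict rule is an assumption.
def build_env(auto: dict[str, str], app_specific: dict[str, str]) -> dict[str, str]:
    """Merge platform-injected vars with the developer's own.

    App-specific values win on conflict, so a deliberate override
    of a platform default is still possible.
    """
    return {**auto, **app_specific}
```

Here `auto` would carry DATABASE_URL, AUTH_URL, the S3_* keys, REDIS_URL, and PORT; the developer's side contributes only things like NEXT_PUBLIC_BASE_URL or STRIPE_API_KEY.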
What I Got Wrong Initially
Version 1 tried to build on the VPS. I'd rsync the source code and run npm install && npm run build on the server. This worked until a Next.js build ate 3GB of RAM and OOM-killed a neighbor project. Now I build locally (or on a build machine) and deploy the pre-built standalone output. The VPS never runs npm install for production apps.
Version 1 didn't have health checks. I'd deploy and assume it worked. Then a client would email me that their site was down because the app crashed on startup due to a missing env var. Now every app requires a /api/health endpoint, and the deploy process polls it before declaring success. If health fails after 2 minutes, the deploy is marked as failed and I get an alert.
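The post-deploy health gate can be sketched like this. The /api/health endpoint and the 2-minute window come from the post; the probe plumbing, function names, and intervals are assumptions:

```python
import time
import urllib.request
import urllib.error

def http_ok(url: str) -> bool:
    """One health probe: True iff the endpoint answers HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def wait_healthy(probe, timeout_s: float = 120.0, interval_s: float = 2.0) -> bool:
    """Poll a probe until it succeeds or the deadline passes.

    Returns True on the first success; False means the deploy should
    be marked failed and an alert fired.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False
```

Usage would be along the lines of `wait_healthy(lambda: http_ok("https://my-new-app.hostkit.dev/api/health"))` as the final deploy step.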
The Compound Effect
The real payoff isn't saving 3 hours on initial setup. It's that every project has identical infrastructure. Same directory layout, same service architecture, same debugging workflow. When something breaks at 11pm, I don't have to remember "wait, did I set up this one differently?" The answer is always no.
Eight projects. Same deploy command. Same rollback command. Same log locations. Same health check pattern. That consistency is worth more than any individual automation.
