Connection Pooling Nightmares: How Serverless Killed My Database
If you're using Prisma with Next.js App Router and haven't thought about connection pooling, you probably have a ticking time bomb in production. Here's how mine went off — and the fix that took three lines of code.
I deployed a client portal on a Friday afternoon. By Monday morning, the database was refusing connections. PostgreSQL's default limit is 100 connections, and somehow a single Next.js app was trying to hold 80 of them.
The culprit: Next.js App Router's serverless-like execution model, combined with Prisma's default connection handling.
The Symptom
PrismaClientInitializationError: Can't reach database server at `localhost:5432`
Connection pool timeout exceeded
The app would work fine for a few requests, then start throwing connection errors. Restarting the Node process would fix it temporarily. Classic pool exhaustion.
Why It Happens
Next.js App Router treats each route handler and server component as a potentially independent execution context. In development with hot module reloading, every code change creates a new module scope — and with it, a new Prisma client instance, each opening its own connection pool.
But even in production, if you instantiate PrismaClient inside a route handler or a utility function that gets re-imported, you can end up with multiple instances, each maintaining its own pool.
// DON'T DO THIS — new client every import
export function getDb() {
  return new PrismaClient() // 💀 new pool every call
}
Prisma's default pool size is num_physical_cpus * 2 + 1, which works out to roughly 10 connections on a typical 4-core machine. Accidentally create 8 client instances and that's around 80 connections. On a shared PostgreSQL server running multiple projects, you're dead.
The Fix: Singleton Proxy
The fix is surprisingly simple. Instead of letting each import create a new database client, store a single instance on globalThis, the one scope that survives hot module reloads in development:
import { PrismaClient } from '@prisma/client'
const globalForPrisma = globalThis as unknown as {
  prisma: PrismaClient | undefined
}

export const prisma = globalForPrisma.prisma ?? new PrismaClient()

if (process.env.NODE_ENV !== 'production') {
  globalForPrisma.prisma = prisma
}
In development, the Prisma client is stashed on globalThis, which survives hot module reloads. In production, the module cache ensures a single instance.
I packaged this as createPrismaProxy() in a shared library so every project gets the same pattern without thinking about it. Import prisma from the package, use it everywhere, never worry about pool management.
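The library's internals aren't shown here, but the core of such a helper is just a generic lazy singleton keyed on globalThis. A minimal sketch of the idea, under assumptions of my own (the generic factory shape and the name createGlobalSingleton are illustrative, not the real package's API):

```typescript
// Sketch of the idea behind createPrismaProxy(): a lazy singleton stored on
// globalThis. The factory is injected so the pattern works for any client,
// not just Prisma, and can be exercised without a database.
export function createGlobalSingleton<T>(key: string, create: () => T): T {
  const store = globalThis as unknown as Record<string, T | undefined>
  if (store[key] === undefined) {
    store[key] = create() // first import (or first post-HMR import) creates it
  }
  return store[key] as T
}

// Assumed usage in the shared library:
// export const prisma = createGlobalSingleton('prisma', () => new PrismaClient())
```

Because the instance lives on globalThis rather than in module scope, a hot reload that throws away the module cache still finds the existing client instead of opening a fresh pool.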
The Native Driver Adapter
The singleton fixed the connection leak, but I wanted more control over pool behavior. Prisma's driver adapters let you swap its built-in connection handling for the native pg driver, which gives you direct access to pool configuration:
import { PrismaPg } from '@prisma/adapter-pg'
import { Pool } from 'pg'
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10, // explicit pool size
})

const adapter = new PrismaPg(pool)
const prisma = new PrismaClient({ adapter })
This gives me explicit control over the connection pool. I can set max, idleTimeoutMillis, and connectionTimeoutMillis directly on the pg Pool instead of relying on Prisma's built-in pool behavior.
Monitoring Connection Count
To catch this early, I added a quick diagnostic query:
SELECT count(*) FROM pg_stat_activity
WHERE datname = 'myapp';
I can run this through the platform's database tools without SSH-ing into the server. If the count is climbing toward the limit, I know something's wrong before users see errors.
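If you'd rather automate the check than run it by hand, the query wraps easily into a small helper. This is a sketch, not production code; RunQuery stands in for whatever client you already have (pg's pool.query, Prisma's $queryRaw, and so on), and the 80% threshold is my own assumption:

```typescript
// Count open connections for one database, plus a threshold check.
// `RunQuery` abstracts the driver so the sketch stays client-agnostic.
type RunQuery = (sql: string, params: unknown[]) => Promise<{ n: number }[]>

export async function connectionCount(run: RunQuery, dbName: string): Promise<number> {
  const rows = await run(
    'SELECT count(*)::int AS n FROM pg_stat_activity WHERE datname = $1',
    [dbName],
  )
  return rows[0].n
}

// Warn once usage crosses a fraction of the server's max_connections.
export function nearLimit(count: number, maxConnections = 100, warnAt = 0.8): boolean {
  return count >= maxConnections * warnAt
}
```

Wire connectionCount into a cron job or health-check route and alert on nearLimit, and you hear about creeping pool usage before PostgreSQL starts refusing connections.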
Lessons Learned
Always use a singleton for database clients in Next.js. This isn't optional — it's a requirement of the framework's execution model. The Prisma docs mention it, but it's buried in a "best practices" section that most people skip.
Set explicit pool sizes. Don't rely on defaults. If you're running multiple projects on the same PostgreSQL server, divide your total connection limit across projects and set each one's pool accordingly. I use max: 10 per project, which gives me room for 8+ projects on a 100-connection server.
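The budget math is simple enough to write down. A sketch, where the reserved count is an assumption of mine (leave headroom for psql sessions, migrations, and admin tools):

```typescript
// Split a server's max_connections across projects, holding some back
// for admin sessions. The numbers mirror the setup described above.
export function poolSizePerProject(
  maxConnections: number,
  projects: number,
  reserved = 10, // assumed headroom for psql/migrations
): number {
  return Math.floor((maxConnections - reserved) / projects)
}

// poolSizePerProject(100, 8) → 11, so max: 10 per project leaves extra slack
```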
Test under load, not just happy path. The app worked perfectly in manual testing. The pool exhaustion only showed up under real traffic patterns where multiple requests hit server components simultaneously. A simple load test would have caught this before production.
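Catching pool exhaustion doesn't require load-testing tooling; a burst of concurrent requests against one database-backed route will surface it. A minimal sketch, where the fetch function is injected (pass the global fetch in real use) and the URL and concurrency are placeholders:

```typescript
// Fire N simultaneous requests and collect status codes. Pool exhaustion
// shows up as 500s or timeouts once concurrency exceeds available connections.
type FetchLike = (url: string) => Promise<{ status: number }>

export async function burst(
  fetchFn: FetchLike,
  url: string,
  concurrency = 50,
): Promise<number[]> {
  const responses = await Promise.all(
    Array.from({ length: concurrency }, () => fetchFn(url)),
  )
  return responses.map((r) => r.status)
}

// e.g. burst(fetch, 'http://localhost:3000/api/users', 50)
```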
Hot reload is the silent killer. Development mode is where this bites hardest, because HMR creates new module scopes aggressively. If your dev server is slowly consuming connections, the singleton pattern fixes it. If you're ignoring the problem because "it works in production," you're probably just under the threshold — for now.
