
The First 48 Hours: How We Build Your Expert

Ryan C.

When a new client starts with Emergent, the most common question is "how long until we're up and running?" The answer is usually 48 hours or less. Not 48 hours of meetings and planning documents — 48 hours from kickoff to a working AI expert that can ship real changes to your project.

Here's what actually happens in those two days.

Hour 0–2: Discovery

Every engagement starts with a conversation. Not a sales call — a technical discovery session. We need to understand:

  • What you're building — the product, the users, the business model
  • Where things stand — existing codebase, tech stack, deployment setup, pain points
  • What you need — the immediate priorities and the longer-term goals
  • How you work — your conventions, your review process, your tolerance for autonomy

This is usually a 30–60 minute call, sometimes supplemented by async messages. The goal isn't to produce a statement of work — it's to build enough understanding to start the expert setup.

Hour 2–8: Codebase Immersion

This is where the real work begins. We read your codebase — not skim it, read it. Every route, every model, every integration. We're building a mental model of:

  • Architecture — how the pieces fit together, where data flows, what the boundaries are
  • Patterns — naming conventions, error handling, component structure, test approaches
  • Decisions — why things are the way they are (framework choices, library selections, workarounds)
  • Debt — where the pain points are, what's fragile, what's overdue for refactoring

This immersion phase is critical. The quality of the expert is directly proportional to how well we understand the project. There are no shortcuts here.

Hour 8–16: Context Engineering

With the codebase understood, we build the expert's context layer. This includes:

  • Project identity files — structured documents that give the expert its operating instructions: what the project is, how it works, what conventions to follow
  • Architecture maps — which files own which responsibilities, how modules relate, where the integration points are
  • Decision records — the "why" behind technical choices, so the expert makes recommendations consistent with existing decisions
  • Convention guides — the patterns the expert should follow for naming, file organization, error handling, and code style

This isn't a dump of documentation. It's a carefully structured context package designed to give the expert the right information at the right time.
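The exact shape of a context package varies by project, but as a rough sketch of the idea (every field and value here is illustrative, not our actual format), the layers above might be modeled like this:

```python
from dataclasses import dataclass, field


@dataclass
class DecisionRecord:
    """The 'why' behind a technical choice, so new work stays consistent."""
    title: str
    rationale: str


@dataclass
class ContextPackage:
    """Illustrative structure for an expert's context layer."""
    identity: str                     # what the project is and how it works
    architecture: dict[str, str]      # path/module -> responsibility it owns
    decisions: list[DecisionRecord] = field(default_factory=list)
    conventions: list[str] = field(default_factory=list)


# A hypothetical project, for illustration only.
pkg = ContextPackage(
    identity="Acme storefront: Next.js frontend, Postgres, Stripe billing",
    architecture={"api/": "route handlers", "lib/db/": "data access"},
    decisions=[DecisionRecord("Use Postgres", "relational data, strong constraints")],
    conventions=["snake_case for DB columns", "errors returned as typed results"],
)
```

The point of the structure is retrieval: when the expert works on `lib/db/`, it pulls the responsibilities, decisions, and conventions relevant to that slice rather than the whole documentation dump.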

Hour 16–24: Tool Integration

A useful expert doesn't just reason about code — it interacts with the actual system. During this phase, we connect the expert to:

  • The codebase — read files, search code, understand the current state of any file
  • The database — inspect schemas, run queries, understand data relationships
  • The deployment pipeline — build, deploy, check health, roll back if needed
  • External services — whatever the project integrates with (payment processors, email services, storage, etc.)

Each tool connection is tested against real scenarios. The expert should be able to deploy a change, verify it's healthy, and roll back if something goes wrong — without human intervention for routine operations.
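A minimal sketch of that deploy-verify-roll-back loop looks like the following. The `deploy`, `health_check`, and `rollback` callables stand in for real pipeline hooks and are assumptions for illustration, not our actual tooling:

```python
import time
from typing import Callable


def deploy_with_rollback(
    deploy: Callable[[], str],          # returns a release id
    health_check: Callable[[], bool],   # True when the service is healthy
    rollback: Callable[[str], None],    # reverts the given release
    retries: int = 3,
    wait_seconds: float = 2.0,
) -> bool:
    """Deploy, poll health, and roll back automatically on failure."""
    release = deploy()
    for _ in range(retries):
        if health_check():
            return True                 # release is live and healthy
        time.sleep(wait_seconds)        # give the service time to come up
    rollback(release)                   # never leave a bad release running
    return False
```

The design choice worth noting is that rollback is the default path, not an operator decision: routine deploys only need a human when the loop returns `False`.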

Hour 24–36: Validation

Now we test the expert against real work. We take actual tasks from the project backlog — bug fixes, small features, configuration changes — and run them through the expert. We're checking:

  • Does the expert follow the project's patterns? — code style, naming, file organization
  • Does it make correct architectural decisions? — putting logic in the right places, respecting boundaries
  • Does it handle edge cases? — error states, validation, security considerations
  • Does it deploy cleanly? — builds succeed, tests pass, health checks green

Any gaps in context or tool access get addressed immediately. This is iterative — each test task reveals what the expert knows well and where it needs refinement.
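Conceptually, each validation pass is a set of named checks run against a proposed change, with failures pointing at the context or tooling gap to fix. A toy harness (the checks here are placeholders; real ones would parse diffs, run the test suite, and lint):

```python
from typing import Callable


def run_validation(
    checks: dict[str, Callable[[str], bool]], diff: str
) -> dict[str, bool]:
    """Run each named check against a proposed change; report pass/fail."""
    return {name: check(diff) for name, check in checks.items()}


# Placeholder checks for illustration only.
checks = {
    "follows_naming": lambda d: "camelCaseVar" not in d,  # project uses snake_case
    "has_tests": lambda d: "def test_" in d,              # change ships with a test
}

report = run_validation(checks, diff="def test_login():\n    pass\n")
failing = [name for name, ok in report.items() if not ok]
```

Each failing check becomes a concrete refinement task: a missing convention in the context package, or a tool the expert couldn't reach.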

Hour 36–48: Handoff

The expert is ready for production use. We deliver:

  • A working expert that can take tasks, write code, and ship changes
  • A context package that documents everything the expert knows and how it's structured
  • A capabilities brief — what the expert can do autonomously, what requires review, and where the boundaries are

From this point forward, the expert is a working member of the project. It handles routine development tasks at machine speed, escalates decisions that need human judgment, and gets sharper with every interaction as its context expands.

Why 48 Hours, Not 48 Days

Traditional agency onboarding takes weeks because humans need time to ramp up. Reading code, attending meetings, building tribal knowledge — it's all serial and slow.

Our process is fast because the expert setup is systematic, not ad-hoc. We've built this same expert architecture across dozens of projects. The patterns are proven. The tooling is mature. The only variable is the project itself — and two days of focused immersion is enough to understand most codebases deeply.

The result: you go from "we need help" to "the expert just shipped its first feature" in the time it takes a traditional agency to schedule a kickoff meeting.

Ready for a dedicated AI expert?

Every project gets its own expert — purpose-built for your codebase, your workflows, your goals. Tell us what you're working on.
