Back to Work
Recruiting · 7 days

A Research Portal That Replaced Spreadsheets in 7 Days

Client: DahliaTEQ

The Challenge

DahliaTEQ is a three-person boutique recruiting firm specializing in executive search. They source candidates from think tanks, consulting firms, and academic institutions — tracking hundreds of professionals across multiple sources with complex scoring criteria.

Their process ran on spreadsheets, manual web research, and scattered notes. It worked for a while, but it didn't scale. They needed a centralized database with scoring, filtering, and the ability to track candidates across multiple job openings.

Off-the-shelf applicant tracking systems weren't built for this. Those tools handle inbound applicants. DahliaTEQ does proactive research — finding people who aren't applying anywhere. Traditional custom development quotes came back at $40-80K with a 3-6 month timeline. For a three-person firm, that's not realistic.

Our Approach

Day 1-2: Core application — Next.js project with auth, PostgreSQL schema, candidate management UI, project and job structure
Day 3-4: Scoring and filtering — multi-criteria scoring system, tier classification (A/B/C/D), advanced filters by source, role, tier, and location
Day 5-6: Data pipeline — scrapers for 9 different sources (CFR, Asia Society, Kroll, and others), image scraping with S3 storage, import utilities
Day 7: Polish, deploy to production on HostKit, client walkthrough and handoff

The scrapers were the interesting part. Each source has a different structure, different data quality, different image handling. We built Playwright-based scrapers that could pull candidate profiles, normalize the data, and score them against active job requirements — all feeding into one clean interface.

The Outcome

What the client got on day 7:

  • 609 scored candidates from 9 integrated sources
  • 1,827 professionals in a centralized talent pool
  • 5 active job requisitions configured with scoring criteria
  • Profile images scraped and stored in S3-compatible object storage
  • Advanced filtering — by source, tier, role, location
  • Notes system for collaborative candidate evaluation
  • AI-powered scoring with written reasoning for each candidate-job match
  • Role-based access so the whole team could work simultaneously
  • Development time: ~7 days
  • Scored candidates: 609
  • Total talent pool: 1,827
  • Sources integrated: 9
  • Codebase: ~7,500 lines

Tech stack: Next.js 15, React, Tailwind CSS, PostgreSQL, Prisma, HostKit, MinIO (S3-compatible object storage), Playwright, Claude Code.

Why This Worked

  • The right tool for the job. No ATS on the market supports proactive research workflows. Custom was the only option — the question was how fast and how cheap.
  • Scrapers compound value. Once the data pipeline existed, the client could re-run imports as new candidates appeared across their 9 sources. The tool gets more valuable over time.
  • Small team, simple auth. Three users meant we could focus on the workflow instead of enterprise access controls. Right-sized architecture for a right-sized team.
  • Ship the whole thing. Database, UI, scrapers, scoring, images, auth, deployment — all in one sprint. No phased rollout, no "MVP first." They got the complete tool on day 7.

Want us to build something like this for you?

We ship production software in days, not months. Tell us what you need — our AI receptionist is standing by.
