Live in Production · 2024-12 to Ongoing · Solo

AlchemizeCV

Job-search workflow platform that turns a master profile into tailored resumes, grounded project bullets, and tracked applications.

  • 9 Onboarding Steps
  • 4 Pipeline Phases
  • 3 Job-Hunt Surfaces
  • 2 Provider Paths

React + Vike · 4-Phase Pipeline · WebSocket · GitHub Context · BYOK · Application Runs

Product Screenshots

AlchemizeCV marketing hero promising role-ready resumes from one master profile.

Authenticated AlchemizeCV profile builder with completion progress and section ordering.

Authenticated recon discover screen for pending discoveries and recon runs.

Authenticated API settings screen showing BYOK provider and model configuration.

Overview

I built AlchemizeCV because resume tailoring is only one piece of a bigger problem: people need a reusable profile, grounded project context, clear privacy boundaries, and a workflow that survives dozens of applications. The product starts with onboarding and profile building, layers in GitHub-backed project analysis, and then turns that context into role-specific resumes, cover letters, and application artifacts through a replayable four-phase pipeline.

Under the hood the product is intentionally polyglot. The web app is a React 18 + Vike thin client, FastAPI owns the workflow and persistence, a Go service turns repositories into structured project context through tree-sitter analysis, and the UI streams generation progress over WebSocket so long-running jobs still feel inspectable instead of opaque.

TL;DR

Built a job-search workflow platform that turns one master profile into tailored resumes, grounded project bullets, and tracked application runs through a replayable four-phase generation pipeline.

Highlights

  • Nine-step onboarding walks users through BYOK, resume import, profile setup, and first-use activation.
  • GitHub imports create semantic digests and a token-aware Context Editor before bullet generation.
  • Job detail pages expose phase progress, prompt editing, LLM call traces, and partial reruns.
  • The product extends beyond resume generation into recon discovery, job tracking, and application runs.

Problem

Resume tailoring kept losing context between profile, projects, and live applications.

Tailoring a resume for every role is already expensive, but the deeper issue is context drift. Experience lives in one place, project evidence lives in repositories, settings live somewhere else, and every application adds more manual copy, prompt tweaking, and second-guessing.

I wanted a product that treats the whole job-search loop as a system: one master profile, grounded project context, replayable generation runs, and enough observability that I could improve the output without guessing which prompt or phase actually failed.

Solution

I turned one-off prompting into a durable job-search workflow.

AlchemizeCV combines guided onboarding, a structured profile builder, GitHub-backed project imports, and a four-phase generation pipeline so users can move from profile setup to role-specific artifacts without rebuilding context each time. The frontend stays thin and task-focused while the backend persists runs, prompt choices, and intermediate artifacts so I can replay or partially rerun work instead of starting over.

That workflow keeps privacy and control explicit. BYOK settings let users choose Gemini directly or route through OpenRouter, WebSocket progress keeps long runs understandable, and the broader job-hunt surfaces like recon discovery and application runs make the product useful beyond the first generated PDF.

Workflow

Onboard -> Ground -> Generate -> Apply

The product flow starts with profile onboarding and BYOK setup, grounds project evidence through GitHub imports and semantic context, generates artifacts through a replayable four-phase pipeline, and continues into cover letters and application runs.
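The replayable pipeline idea can be sketched in a few lines. This is a minimal illustration, not the product's actual code: phase names follow the four phases described later in this page, `Run` and the string-composition stand-in for LLM calls are assumptions for the sketch.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each phase persists its output so a later rerun
# can start from any phase instead of regenerating everything.
PHASES = ["extract", "synthesize", "prune", "assemble"]

@dataclass
class Run:
    artifacts: dict = field(default_factory=dict)  # phase name -> output

def run_pipeline(run: Run, inputs: str, start_at: str = "extract") -> Run:
    """Run phases in order, reusing persisted artifacts before start_at."""
    prev = inputs
    started = False
    for phase in PHASES:
        if phase == start_at:
            started = True
        if not started:
            prev = run.artifacts[phase]   # reuse the stored artifact
            continue
        prev = f"{phase}({prev})"         # stand-in for the real LLM call
        run.artifacts[phase] = prev       # persist the intermediate artifact
    return run
```

Because artifacts before `start_at` are read back rather than recomputed, a prompt tweak in the pruning phase only costs the pruning and assembly steps.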

Lifecycle

PENDING -> GENERATING -> GENERATED -> RENDERING -> COMPLETE / FAILED
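One way to read the lifecycle above is as a small state machine. The transition map below is an assumption for illustration (including the idea that FAILED runs can be retried), not the product's actual implementation:

```python
# Hypothetical transition map for the run lifecycle; the exact edges
# (especially retry from FAILED) are assumptions for this sketch.
TRANSITIONS = {
    "PENDING":    {"GENERATING", "FAILED"},
    "GENERATING": {"GENERATED", "FAILED"},
    "GENERATED":  {"RENDERING", "FAILED"},
    "RENDERING":  {"COMPLETE", "FAILED"},
    "COMPLETE":   set(),
    "FAILED":     {"PENDING"},  # assumed: failed runs can be re-queued
}

def advance(state: str, target: str) -> str:
    """Move a run to target, rejecting transitions the lifecycle forbids."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```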

Key Endpoints

  • POST /api/jobs/extract: Extract job requirements from a pasted URL or posting
  • POST /api/jobs/:label/async: Start an async generation run for a job
  • GET /api/jobs/:label/generate/status: Fetch current generation and rendering state
  • POST /api/jobs/:label/render: Render the current bundle to PDF
  • POST /api/projects/import/github: Import repositories and generate semantic project context
  • POST /api/discover: Accept browser-extension discoveries into the review queue

Architecture

Polyglot workflow platform with a React 18 + Vike web app, FastAPI feature slices for profile/jobs/settings/applications, a Go tree-sitter analysis service for GitHub project context, and PostgreSQL-backed persistence for users, jobs, runs, and artifacts.

Ingress

  • Caddy reverse proxy
  • HTTPS/TLS
  • Cookie + token boundaries

Web App

  • React 18
  • Vike routing
  • TypeScript UI + live preview

Resume API

  • FastAPI feature slices
  • WebSocket progress
  • Profile / jobs / settings flows

Generation Pipeline

  • Raw extraction
  • Synthesis
  • Context-aware pruning
  • Assembly + rendering

Code Context

  • Go portfolio service
  • tree-sitter parser pool
  • Semantic digest artifacts

Data

  • PostgreSQL 16
  • Run lineage + prompts
  • Rendered artifacts + settings

Product Surfaces

8 Shipped Capabilities

Guided Onboarding + Profile Builder

core

Nine-step onboarding, structured profile editing, and live completion guidance replace blank-state prompting.

GitHub Context Editor

integration

Repository imports generate semantic digests and let users curate exactly which code facts the model can see.

Replayable 4-Phase Pipeline

core

Runs persist intermediate artifacts so prompt changes and partial reruns stay inspectable.

Live Generation Observability

dx

WebSocket progress, phase visualization, and LLM call traces keep long-running jobs debuggable.

BYOK Provider Settings

security

Users configure Gemini or OpenRouter-backed models without relying on shared server keys.

Cover Letters + Application Runs

core

Generated job artifacts feed into tracked application runs instead of stopping at the PDF.

Recon Discovery Queue

integration

Extension-driven discovery surfaces pending jobs, recon runs, and events for later review.

Browser-Pool Rendering

performance

Semaphore-limited Playwright rendering keeps PDF generation fast without unbounded browser churn.

Tech Stack

Web App

React 18

Authenticated product UI and interactive editing surfaces

Vike

Route structure and SSR/client rendering split

TypeScript

Typed APIs, feature state, and UI logic

Tailwind CSS

Shared design primitives and responsive styling

Backend

Python 3.13

Workflow orchestration and feature-slice backend code

FastAPI

REST endpoints, auth surfaces, and WebSocket progress

SQLAlchemy Async

Async persistence for profiles, jobs, and runs

Playwright

HTML-to-PDF rendering and browser pool execution

Code Analysis

Go 1.25

Repository parsing and structured project context generation

tree-sitter

Incremental AST parsing across imported repositories

Parser pooling

Reuse language parsers across large repository imports
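The parser-reuse idea is a classic object pool. The real service is written in Go and pools tree-sitter parsers per language; this is a language-agnostic sketch of the same pattern, with `factory` standing in for parser construction:

```python
import queue

# Generic object-pool sketch of parser reuse; the actual service pools
# tree-sitter parsers in Go, one pool per language.
class ParserPool:
    def __init__(self, factory, size: int):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())     # build parsers once, up front

    def acquire(self):
        return self._pool.get()           # blocks when all parsers are busy

    def release(self, parser):
        self._pool.put(parser)            # return the parser for reuse
```

Across a large import, every file parse borrows an existing parser instead of paying construction cost again.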

Integrations

OpenRouter

Model catalog access and multi-provider routing

Google Gemini

Direct provider path for BYOK users

GitHub App

Repository import and curated code evidence flows

Firefox Extension

Recon capture and guarded automation handoff

Tradeoffs & Decisions

Why keep the generation pipeline split into four phases?

I wanted better output quality and better debugging. Separating extraction, synthesis, pruning, and assembly makes each prompt easier to tune and lets me persist intermediate artifacts for replay or inspection.

Alternatives: single end-to-end prompt, template-only generation, client-only prompting.

Why build a token-aware Context Editor for project imports?

Project bullets are stronger when they are grounded in real code, but raw repository context is too large and too noisy. Semantic digests plus a user-facing context editor give me controllable evidence instead of blind prompt stuffing.

Alternatives: direct README ingestion, uncurated full-repo context, manual project entry only.

Why use a React + Vike thin client over a heavier frontend architecture?

The product already has a complex backend workflow, so I kept the web app focused on task-specific UI, typed API access, and live status instead of duplicating domain logic in the browser.

Alternatives: fully stateful SPA architecture, server-rendered forms only, desktop app.

Why isolate rendering behind a shared browser pool?

PDF rendering has real cost. A semaphore-limited Playwright browser pool gives good throughput and warm performance without spinning up an uncontrolled number of Chromium instances under concurrent demand.

Alternatives: spawn a browser per render, third-party PDF API, client-side rendering only.

Challenges

LLM context windows collapse when profile, projects, and job context all compete for tokens

I split generation into phases, produced canonical semantic digests for projects, and let users curate context before the pruner applies its evolving-context snowball across the final selection pass.
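The token-budget side of that curation can be sketched as a greedy selection over scored digest facts. This is an illustration only: the crude whitespace token estimate and the `(score, fact)` shape are assumptions, and the real pruner's scoring and "evolving-context snowball" are richer than a single greedy pass.

```python
# Minimal sketch of token-aware context selection; the real Context
# Editor and pruning phase use richer scoring than this greedy pass.
def estimate_tokens(text: str) -> int:
    return len(text.split())              # crude stand-in for a tokenizer

def select_context(facts: list[tuple[float, str]], budget: int) -> list[str]:
    """Keep the highest-scored facts that fit inside the token budget."""
    chosen, used = [], 0
    for score, fact in sorted(facts, key=lambda f: f[0], reverse=True):
        cost = estimate_tokens(fact)
        if used + cost <= budget:
            chosen.append(fact)
            used += cost
    return chosen
```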

Generation runs need to stay debuggable after prompts or settings change

I persist run lineage, prompt choices, and intermediate artifacts in the database so I can partially rerun from the affected phase instead of forcing a full restart from scratch.

PDF rendering can become a bottleneck under concurrent demand

I use a semaphore-limited shared browser pool with isolated contexts so rendering stays fast and bounded instead of spawning uncontrolled browser processes.

Users need privacy and provider control without turning the app into a billing proxy

I kept the provider model BYOK-first, added direct Gemini and OpenRouter paths, and separated web auth, WebSocket auth, and extension-token auth so each surface has a clear trust boundary.
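The BYOK split between a direct Gemini path and an OpenRouter path amounts to resolving a base URL and auth header from user settings. A minimal sketch, assuming a flat settings dict; the header names reflect the public Gemini and OpenRouter APIs, but the settings shape and defaults here are illustrative:

```python
# Hypothetical sketch of BYOK provider routing; the settings shape and
# the default provider are assumptions, not the product's actual config.
PROVIDER_BASES = {
    "gemini": "https://generativelanguage.googleapis.com/v1beta",
    "openrouter": "https://openrouter.ai/api/v1",
}

def resolve_provider(settings: dict) -> tuple[str, dict]:
    """Pick the user's provider path and build auth headers from their key."""
    provider = settings.get("provider", "openrouter")
    key = settings["api_key"]             # user-supplied, never a server key
    if provider == "gemini":
        return PROVIDER_BASES["gemini"], {"x-goog-api-key": key}
    return PROVIDER_BASES["openrouter"], {"Authorization": f"Bearer {key}"}
```

Because the key always comes from the user's own settings, neither path ever touches a shared server credential.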

Outcomes

  • I shipped one workflow that starts with onboarding and profile building instead of dropping users into a blank prompt box
  • I ground project bullets in GitHub-derived semantic digests and user-curated code context rather than marketing copy
  • I persist generation runs, prompt choices, and intermediate artifacts so reruns and debugging stay replayable
  • I expose live generation state through WebSocket progress, phase visualization, and LLM call detail instead of opaque loading
  • I expanded the product beyond resumes into recon discovery, job tracking, cover letters, and application run review
  • I kept privacy explicit with BYOK provider settings so the product does not depend on shared server keys

Explore the code

This case study focuses on the onboarding-to-application workflow, the replayable generation pipeline, the GitHub context model, and the polyglot services behind AlchemizeCV.