I had four weddings of my own. The third and fourth happened back-to-back in Asia: ceremonies for family in different countries. Between the events, I had a lot of downtime in transit. Long drives, slow mornings, hours where the only thing to do was think.
So I thought about what I actually want to do with my career. Not the interview version. The real version. I spent a month in structured conversations with Claude, and by the time I came home I had exactly one role I wanted: Product Manager, Claude Code at Anthropic.
This post isn’t the application. It’s the system I built to produce it.
The pipeline, end to end:
- Layer 1 — Foundation. A 630-line Master Career Narrative: every role, every project, every bullet, normalized for downstream consumption.
- Layer 2 — Strategy. A Role Alignment Framework that classifies companies into archetypes and scores roles into tiers before I look at a JD.
- Layer 3 — Targeting. A multi-model research pipeline (Claude + ChatGPT + Gemini) that identified target companies through structured cross-model debate.
- Layer 4 — Discovery. A job scanner running on a Raspberry Pi that monitors 190+ companies every eight hours and delivers matches via Telegram.
- Layer 5 — Evaluation. A Claude session that auto-researches each company, runs the full framework, and produces a structured verdict.
- Layer 6 — Execution. A 3-file resume skill that tailors content per JD, plus a project system prompt for cover letters and essays.
Every layer built with Claude. Every decision traceable. Glass box, not black box.
The Principle: More Turns, Not Less Effort
I use AI to lower the cost of each iteration. Then I spend the savings on more iterations, not less effort. Where someone else takes 2-3 passes on a resume, I take 20+. Nothing is fully automated. I’m responsible for every final decision. But the cost of getting to each decision point is radically lower, so I can make more of them.
And everything is transparent. Every automation is inspectable. Every decision is traceable. You can follow the entire pipeline from input to output and see exactly why each choice was made. I’m responsible for maybe 5% of the raw effort, but 100% of it is visible. I built my system the way I believe AI tools should be built: surface the reasoning, don’t hide it.
Layer 1: The Foundation
One document feeds everything downstream.
Markdown · Claude Projects (claude.ai) · ~630 lines, 6 sections
My Master Career Narrative is ~630 lines of Markdown covering my entire professional life. Every experience has format-specific versions — resume, LinkedIn, website — with explicit guidance on what to say, what not to say, and when to use each. The resume-ready bullets follow an impact-first format: outcome, then method, then evidence.
The narrative is the database. The applications are queries against it.
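A minimal sketch of that database-and-query idea, with hypothetical section names and bullet text (the real narrative's sections and tags differ): each experience carries format-tagged versions, and a downstream consumer pulls only the bullets for one output format.

```python
import re

# Hypothetical excerpt of the Master Career Narrative; the real document's
# sections, tags, and bullets differ. Each experience carries format-tagged versions.
NARRATIVE = """\
## Project Ada
- [resume] Solo-built an enterprise analytics platform in 13 days.
- [linkedin] Built Project Ada, an AI-assisted analytics platform, in two weeks.
## GenAI Platform
- [resume] Led a GenAI platform rollout across the practice.
"""

def query(narrative: str, fmt: str) -> dict[str, list[str]]:
    """Treat the narrative as a database: pull the bullets for one output format."""
    results: dict[str, list[str]] = {}
    section = None
    for line in narrative.splitlines():
        if line.startswith("## "):
            section = line[3:]
        elif section and (m := re.match(rf"- \[{fmt}\] (.+)", line)):
            results.setdefault(section, []).append(m.group(1))
    return results

resume_bullets = query(NARRATIVE, "resume")
```

The point of the tagging is that a resume build, a LinkedIn refresh, and a website update are all just different queries over the same normalized source.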
Layer 2: The Strategy
A job title tells you almost nothing about the actual role.
Markdown · Claude Projects · 4 company archetypes × 4 role tiers × kill switch
Two documents work together. The Core Function Alignment Framework classifies companies into archetypes (research-led, product-led, sales-led, deployment-led) and scores where a given role sits within that archetype — Core, Core-via-constraint, Core-adjacent, or Support. The Next Role Strategy tracks four specific gaps I’m closing and runs a kill switch: four forcing questions that must pass before I invest more time.
The canonical example: when Anthropic posted a TPM role alongside the PM role, my framework scored it immediately. The company is Tier 1. The TPM role is Tier 3. Don’t confuse the two.
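The shape of that check can be sketched in a few lines. The archetype and tier names come from the framework above; the kill-switch wiring and the `RoleVerdict` type are illustrative, not the real documents' logic.

```python
from dataclasses import dataclass

ARCHETYPES = ("research-led", "product-led", "sales-led", "deployment-led")
TIERS = ("Core", "Core-via-constraint", "Core-adjacent", "Support")

@dataclass
class RoleVerdict:
    archetype: str
    tier: str
    invest_more_time: bool

def evaluate(archetype: str, tier: str, kill_switch: list[bool]) -> RoleVerdict:
    """Classify, score, then gate: all four forcing questions must pass."""
    assert archetype in ARCHETYPES and tier in TIERS
    passed = len(kill_switch) == 4 and all(kill_switch)
    return RoleVerdict(archetype, tier, passed)

# Same company, two roles: the company clears the bar, one role does not.
pm = evaluate("product-led", "Core", [True, True, True, True])
tpm = evaluate("product-led", "Core-adjacent", [True, True, False, True])
```

The value of encoding it, even this crudely, is that a Tier-1 company can never smuggle a lower-tier role past the gate.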
Layer 3: The Targeting
Three models debating each other to find the right companies.
Claude · ChatGPT · Gemini · structured extraction + cross-model debate
I wrote research prompts and ran them across Claude, ChatGPT, and Gemini simultaneously. An extraction bot pulled structured data from each model’s output — same schema, three sources. An orchestration bot identified every point of divergence, framed each as an open question, and fed them back for another round. Structured debate with me as moderator.
The automation handles breadth. The judgment stays with me.
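The divergence step is the mechanical heart of the debate. A sketch, with hypothetical field names and values standing in for the real extraction schema: three model outputs under one schema, and every field where they disagree becomes an open question for the next round.

```python
# Hypothetical outputs under a shared schema; the real fields differ.
outputs = {
    "claude":  {"stage": "Series B", "pm_org": "centralized", "hiring": "yes"},
    "chatgpt": {"stage": "Series B", "pm_org": "embedded",    "hiring": "yes"},
    "gemini":  {"stage": "Series C", "pm_org": "embedded",    "hiring": "yes"},
}

def divergences(outputs: dict[str, dict[str, str]]) -> list[str]:
    """Frame every point of cross-model disagreement as an open question."""
    fields = next(iter(outputs.values())).keys()
    questions = []
    for field in fields:
        values = {model: out[field] for model, out in outputs.items()}
        if len(set(values.values())) > 1:
            questions.append(f"Models disagree on '{field}': {values}. Which is right?")
    return questions

open_questions = divergences(outputs)  # fed back to all three models for another round
```

Agreement needs no follow-up; only the disagreements consume another round, which is what keeps the debate converging instead of sprawling.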
Layer 4: The Discovery
A Raspberry Pi scanning 190+ companies while I sleep.
Python · OpenClaw · Gemini · Telegram Bot API · Raspberry Pi · Lever, Greenhouse, Ashby, YC APIs
Every eight hours, a Python scanner hits four ATS APIs (Lever, Greenhouse, Ashby, YC), deduplicates against everything it’s ever seen, and feeds new listings to Gemini for relevance filtering. The filtering criteria were distilled from my career narrative and role alignment framework into a natural-language prompt the model reads on every scan. False positives cost me a glance. False negatives cost me an opportunity.
The scanner is a framework, not a personal tool. All personalization lives in three plain-text config files. Swap one markdown file to change the targeting entirely.
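The fetch-and-dedupe half of a scan cycle looks roughly like this. The Greenhouse board endpoint is its public API; the persistence filename is a hypothetical stand-in, and the Gemini relevance filter and Telegram delivery are omitted here.

```python
import hashlib
import json
import urllib.request

SEEN_PATH = "seen_jobs.json"  # hypothetical persistence file for the dedupe set

def fetch_greenhouse(board: str) -> list[dict]:
    """Greenhouse's public job-board API; Lever, Ashby, and YC get analogous fetchers."""
    url = f"https://boards-api.greenhouse.io/v1/boards/{board}/jobs"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["jobs"]

def dedupe(jobs: list[dict], seen: set[str]) -> list[dict]:
    """Drop anything the scanner has ever seen, keyed on a stable hash of the URL."""
    fresh = []
    for job in jobs:
        key = hashlib.sha256(job["absolute_url"].encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            fresh.append(job)
    return fresh
```

In the pipeline described above, the `fresh` listings would then go to Gemini with the natural-language relevance prompt, and anything that survives is pushed out through the Telegram Bot API.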
Layer 5: The Evaluation
Automatic web research before the framework even runs.
Claude.ai Project · web_search + web_fetch · 4 persistent knowledge docs
A Claude.ai Project with four strategy documents loaded as persistent knowledge. When I paste a JD, it fetches the company’s current stage, funding, headcount, and recent news before making any judgment. Then it runs the full framework: archetype classification, tier assignment, gap closure scores, kill switch. The output is a fixed schema — verdict (Take the call / Explore with caution / Probably pass / Hard pass), tier, scores, and tagged diligence questions. I can read just the top block and know whether the full write-up is worth my time.
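The fixed schema is what makes the top block skimmable. A sketch of its shape as a data type, with the verdict labels from above and hypothetical field names for the rest:

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    TAKE_THE_CALL = "Take the call"
    EXPLORE_WITH_CAUTION = "Explore with caution"
    PROBABLY_PASS = "Probably pass"
    HARD_PASS = "Hard pass"

@dataclass
class Evaluation:
    """One evaluated JD: the headline is readable without the full write-up."""
    verdict: Verdict
    tier: int                        # 1 = best
    gap_scores: dict[str, int]       # hypothetical 0-5 score per tracked gap
    kill_switch_passed: bool
    diligence_questions: list[str] = field(default_factory=list)

    def headline(self) -> str:
        gate = "pass" if self.kill_switch_passed else "fail"
        return f"{self.verdict.value} | Tier {self.tier} | kill switch: {gate}"
```

Fixing the schema means every evaluation is comparable with every other one, which is what lets a single glance at the top block settle whether to keep reading.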
Layer 6: The Execution
A decision engine, not a template.
Claude Skill (SKILL.md) · Node.js docx library · LibreOffice headless · pdfinfo
The resume skill is three files. SKILL.md carries the logic: three tone profiles (BUILDER / PRODUCT / OPERATOR), bullet selection rules, language mirroring. bullets.md holds every resume-usable bullet, pre-extracted with impact-first restructuring. formatting.md pins the visual spec down to DXA margins. When I paste a JD, the skill selects a tone profile, picks which engagements to feature, reorders sections, writes a Node.js script to generate the .docx, and validates that it fits on one page.
For Anthropic, it put Ontos ahead of the GenAI Platform: a CLI tool leading the resume for a CLI-tool PM role. It swapped my weakest bullet for the Ada build. Encoded logic, not random choices.
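Tone-profile selection can be sketched as keyword scoring against the JD. The signal lists here are hypothetical; the real SKILL.md encodes richer rules than substring counts.

```python
# Hypothetical signal lists; the real SKILL.md's selection rules are richer.
TONE_SIGNALS = {
    "BUILDER":  ("cli", "developer tools", "api", "infrastructure"),
    "PRODUCT":  ("roadmap", "user research", "metrics", "growth"),
    "OPERATOR": ("stakeholders", "program", "rollout", "process"),
}

def pick_tone_profile(jd_text: str) -> str:
    """Score each profile by how many of its signals appear in the JD."""
    jd = jd_text.lower()
    scores = {tone: sum(s in jd for s in signals)
              for tone, signals in TONE_SIGNALS.items()}
    return max(scores, key=scores.get)

profile = pick_tone_profile("PM for a CLI developer tools product; own the API roadmap")
```

The profile then cascades: it decides which bullets.md entries are eligible and which sections lead, before any prose gets written.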
Cover letters and essays aren’t a separate skill — they’re handled by the project system prompt. Resumes benefit from systematic rules. Essays benefit from iteration. The “Why Anthropic?” essay went through 10+ versions across three Claude sessions.
The Proof Point: Project Ada
119 scripts. 1.57 million records. Two weeks. One person.
Python 3.12 · pandas · Cursor with Claude · Anthropic API · React/Vite · 282 commits in 13 days
At McKinsey, I solo-built an enterprise analytical platform. 114 interactive dashboards, 63 data packages, 177 analytical outputs total. Built with Cursor and Claude in 13 calendar days, now generating content for CIO steering committee presentations.
The classification engine is deliberately rule-based — keyword matching, not LLM inference. Claude handles only the 280 ambiguous edge cases, and a human reviews every one. Twelve codified rules govern the workspace: data lineage on every output, auto-committed artifacts, routed files. The agent operates within a system designed to make it predictable.
The Meta Layer
The process is the proof.
Claude Code (Opus 4.6) · Claude.ai web sessions · cross-session human routing
The “Why Anthropic?” essay was drafted across three Claude sessions — one with codebase access that scanned 23 repos, one with my career narrative for structural drafts, one for voice and argument. I sat at the center routing outputs between them, moderating disagreements, making the final call on every sentence. No automated handoffs. No single session had the full picture.
The system used to produce this blog post — multi-session orchestration, structured contributor prompts, fact-check gates, human review at every merge — is itself an instance of the system the blog describes. A drafting session wrote prose. A contributor system asked each session to fact-check its own section. An orchestrator assembled the final version. Every claim verified against the session that produced it. The process is the proof.
The architecture scales beyond me. The scanner is a framework — swap three config files. The resume skill is a template for building other skills. The evaluation session is a project setup anyone can replicate.
This post was written with Claude. The system it describes was built with Claude. The system is the argument. If you want to see the code, the repos are on GitHub. If you want to talk about the approach, I’m on LinkedIn.