An admissions OS that runs seven agents around one shared applicant object.
Intake, strategy, essay editing, deep review, framework gate, gap analysis, and school comparison — all reading and writing the same Narrative Anchor. Foreground agents on Haiku, deep work on Sonnet, blended at ~$17 per applicant.
MBA admissions consulting doesn't scale. Generic AI tools produce essays that sound like everyone else's.
A family pays $15K–$25K for admissions consulting. The consultant meets once a week, remembers last session's context by flipping through notes, and can handle maybe 15 clients before quality drops. Meanwhile, generic AI tools write essays that use the same three opening phrases and cite stats from the wrong school — and every applicant looks the same in the pile.
The failures are specific. A strategic theme from week one gets lost by week four. A claim made in the resume doesn't match the claim in the "why MBA" essay, and nobody notices until an admissions committee does. Seven different tools get opened for seven different steps — essay editor, review service, school-fit calculator — none of which share context. The applicant ends up stitching the story together on their own, badly.
The root cause isn't the consultant's skill or the AI's IQ. It's that there's no shared object that travels with the applicant across every step — no single source for themes, claims, voice fingerprint, and constraints that every agent reads and writes. Without that, each step is a fresh re-brief, and contradictions become inevitable.
Admissions OS exists to make that shared object the spine of the workflow. Seven specialized agents, each doing one job well, all reading and writing the same Narrative Anchor — so a claim made on agent 2 is citable on agent 7, and contradictions get flagged before a committee sees them.
Seven agents. One shared object. Tier-based routing keeps cost predictable.
The Narrative Anchor is the source of truth. Not a file, not a doc.
Every agent reads the Narrative Anchor Object (NAO) before it acts and writes back when it finishes. Themes, claims, voice fingerprint, applicant constraints: all versioned per step. The contradiction you'd normally catch at final submission gets flagged at agent 3, because the same object is in scope every time.
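The shared-object idea can be sketched in a few lines. This is a hypothetical shape, not the product's actual schema; the class and field names (`NarrativeAnchor`, `NAOEntry`, `unsupported_claims`) are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class NAOEntry:
    """One versioned write: which agent, at which step, changed what."""
    agent: str
    step: int
    payload: dict

@dataclass
class NarrativeAnchor:
    """Hypothetical sketch of the shared object every agent reads and writes."""
    themes: list = field(default_factory=list)
    claims: dict = field(default_factory=dict)   # claim text -> evidence
    history: list = field(default_factory=list)  # versioned per step

    def write(self, agent: str, step: int, payload: dict) -> None:
        self.history.append(NAOEntry(agent, step, payload))
        self.themes.extend(payload.get("themes", []))
        self.claims.update(payload.get("claims", {}))

    def unsupported_claims(self) -> list:
        """Claims with no evidence attached: flagged early, not at submission."""
        return [c for c, ev in self.claims.items() if not ev]

nao = NarrativeAnchor()
nao.write("intake", 1, {"themes": ["operator-to-strategist"],
                        "claims": {"led a 12-person team": "resume bullet 3"}})
nao.write("essay_editor", 3, {"claims": {"grew revenue 40%": ""}})
```

Because agent 3's unsupported claim lands in the same object agent 1 wrote to, `unsupported_claims()` surfaces it immediately rather than at final review.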
Foreground agents run on Haiku. Deep work runs on Sonnet.
The essay editor fires on every keystroke and must stay under 300ms, which is Haiku-plus-prompt-caching territory. Strategy and deep review run multi-minute with four-dimensional scoring; Sonnet handles that. Each agent knows its tier, so the per-applicant cost stays predictable instead of spiraling with usage.
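A static tier map is one way to make that routing explicit. The assignments beyond the ones named above (essay editor on Haiku; strategy and deep review on Sonnet) are assumptions for illustration, not the product's actual configuration:

```python
# Hypothetical agent -> model-tier map. Only essay_editor, strategy, and
# deep_review reflect the stated design; the rest are illustrative guesses.
TIERS = {
    "intake": "haiku",
    "essay_editor": "haiku",      # keystroke path: <300ms budget
    "framework_gate": "haiku",
    "school_fit": "haiku",
    "strategy": "sonnet",         # multi-minute deep work
    "deep_review": "sonnet",
    "gap_analysis": "sonnet",
}

def model_for(agent: str) -> str:
    # Default to the cheap tier: an unknown agent never silently
    # lands on the expensive model.
    return TIERS.get(agent, "haiku")
```

Defaulting unknown agents to the cheap tier is what keeps cost predictable: a new agent has to opt in to Sonnet, never drift onto it.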
Gates warn. They don't block.
The framework gate soft-warns when a claim lacks NAO evidence; it doesn't stop the applicant. Guidance over control. An advisor that blocks the user at every edit gets turned off; one that flags and explains ships. "This claim isn't in your NAO — want to add evidence or rephrase?"
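A soft-warn gate reduces to a function that always allows the edit but attaches a warning when a claim has no NAO evidence. A minimal sketch with hypothetical names:

```python
def framework_gate(claim: str, nao_claims: dict) -> dict:
    """Soft-warn gate sketch: never blocks, only flags and explains.

    nao_claims maps claim text -> evidence string (empty = no evidence).
    """
    evidence = nao_claims.get(claim, "")
    if evidence:
        return {"allowed": True, "warning": None}
    return {
        # Guidance over control: the edit still goes through.
        "allowed": True,
        "warning": f'"{claim}" isn\'t in your NAO yet. '
                   "Want to add evidence or rephrase?",
    }
```

The invariant worth testing is that `allowed` is `True` on every path; the gate's only power is the warning text.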
UI first. Agents plug in behind it.
The prototype ships with simulated deterministic brains so the latency, pipeline orchestration, cost telemetry, and NAO schema are production-accurate before a single LLM call is wired up. When the simulated brains get replaced with real Anthropic calls, the UI layer doesn't change — and the ~$17 per-applicant economics stay achievable, not aspirational.
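The swap works because simulated and real brains share one interface. A sketch assuming Python and the Anthropic Messages API shape; `SimulatedBrain`, `AnthropicBrain`, and `run_agent` are illustrative names, not the product's actual code:

```python
from typing import Protocol

class Brain(Protocol):
    def run(self, prompt: str) -> str: ...

class SimulatedBrain:
    """Deterministic stand-in: same interface, canned output, zero API cost."""
    def run(self, prompt: str) -> str:
        return f"[simulated] {prompt[:40]}"

class AnthropicBrain:
    """Real calls drop in later behind the same interface (sketch only)."""
    def __init__(self, client, model: str):
        self.client, self.model = client, model
    def run(self, prompt: str) -> str:
        msg = self.client.messages.create(
            model=self.model, max_tokens=1024,
            messages=[{"role": "user", "content": prompt}])
        return msg.content[0].text

def run_agent(brain: Brain, prompt: str) -> str:
    # The pipeline/UI layer never knows which brain it's talking to.
    return brain.run(prompt)
```

Everything above `run_agent` is swappable; everything calling it (latency budgets, orchestration, cost telemetry) is exercised identically in both modes, which is what makes the prototype's numbers production-accurate.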
Three things change once the NAO is the spine.
Specialized, context-sharing steps
Intake, strategy, essay editing, deep review, framework gate, gap analysis, school fit — each reading and writing the same NAO. No context loss between tools.
Blended AI cost with prompt caching
Tier-based routing keeps foreground interactions cheap and deep analysis affordable, blending to roughly $17 per applicant. Compare that to $15K–$25K for a single consultant, or generic AI tools that still need human cleanup.
From raw intake to v1 Narrative Anchor
A single strategy-agent pass turns raw intake into a v1 Narrative Anchor, and deep-review four-dimensional scoring completes in parallel in under two minutes. Applicants stay in flow instead of waiting on human reviewers.
Numbers observed in Brilworks' Admissions OS prototype. Actual figures on your stack will depend on applicant volume, essay count per school, and prompt-cache hit rate.
Honest fit criteria. We'd rather say no than oversell.
✓ Strong fit if
- You run an MBA, grad-school, or professional-program admissions practice with 50+ applicants per cycle
- Consultants already follow a multi-step process, but context gets lost between sessions and tools
- The applicant's voice, themes, and claims must stay consistent across 3–8 essays per school
- You're willing to model a shared NAO schema first and plug the agents in behind it
✗ Not a fit if
- You serve fewer than 20 applicants a year — the orchestration overhead isn't earning its keep
- Your consulting process isn't written down yet — start with positioning and curriculum, not orchestration
- You want a one-shot essay generator, not a multi-agent workflow with a shared object
- You're not willing to run two model tiers (Haiku + Sonnet) in the same pipeline
Book a 30-minute scoping call.
We'll walk through your current admissions process, map it against the 7-agent NAO pattern, and tell you honestly whether it fits your practice — and what it would take to ship.