Infrastructure · Wiki operator

An infrastructure agent that keeps your wiki healthy — and never acts without asking.

Daily compile, weekly lint, weekly concept auto-generation. On-demand tool evaluations with a fixed report schema. Every side-effect action is gated on human approval first.

350 wiki files continuously maintained
57 broken links fixed last month
0 silent actions taken without approval
The Challenge

Wikis rot quietly. Tool evaluations happen inconsistently. Nobody owns either problem.

A team wiki starts clean. Six months in, every third link is a 404. Concepts that get mentioned in ten articles don't have their own pages. A new framework launches and the engineering team spends four hours testing it only to end up with "seems good?" as the verdict.

The failures compound. The KB becomes a junkyard nobody trusts, so nobody updates it, so it rots faster. Tool evaluations get written in three different formats by three different engineers and become impossible to compare. A senior engineer becomes the bottleneck for both problems and quietly starts saying no.

The root cause isn't the team. It's that there's no forced cadence for maintenance and no forced structure for evaluations. Both jobs feel important, but neither is urgent on any given day — so they both get put off indefinitely.

This agent gives both jobs a schedule and a schema: three cron jobs for the KB, one fixed report template for evaluations, and every side-effect action approved before it runs.

How the agent handles it

Three scheduled jobs. One report template. Zero silent actions.

[Flow diagram, rendered here as text]

  • DAILY 9 AM — Wiki Compile Loop (compile.py): scan raw/, reindex QMD, flag gaps & contradictions
  • SUNDAY 10 AM — Wiki Lint Pass (lint.py --report): writes health-report.md with top 3 actions
  • MONDAY 11 AM — Auto-Generate Concepts: read meta/gaps.md, pick top 3 mentioned, generate stubs
  • WIKI / (single source of truth): 350 files (concepts, articles, metadata) · 31 concept articles auto-generated this month · 93 broken links (down from 150+) · master-index.md searchable via QMD · meta/gaps.md tracks most-mentioned topics · meta/health-report.md holds weekly status + actions
  • Feeds: research profile (routing maps) · marketing validator (claim sources)
1

The wiki is kept alive by three scheduled jobs.

Daily compile ingests new markdown and reindexes QMD. Weekly lint writes a health-report.md with top-three recommended actions. Weekly concept generation reads gaps.md, picks the three most-mentioned-but-missing concepts, and drafts stubs. The KB improves whether or not anyone remembers to check.
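The cadence above maps onto three ordinary cron entries. A sketch, assuming the scripts live in /srv/wiki; the page names compile.py and lint.py, while gen_concepts.py and the exact flags are invented for illustration:

```shell
# Illustrative crontab; paths and the gen_concepts.py name are assumptions.

# Daily 9 AM: ingest new markdown from raw/ and reindex QMD
0 9 * * *   cd /srv/wiki && python compile.py

# Sunday 10 AM: lint pass, writes meta/health-report.md with top 3 actions
0 10 * * 0  cd /srv/wiki && python lint.py --report meta/health-report.md

# Monday 11 AM: draft stubs for the top 3 most-mentioned missing concepts
0 11 * * 1  cd /srv/wiki && python gen_concepts.py --gaps meta/gaps.md --top 3
```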

2

Tool evaluations follow a fixed schema, not a vibe.

Every report uses the same template: TOOL, INSTALL, STATUS, TESTED, VERDICT, GOTCHAS. Two evaluations you didn't write are still comparable at a glance. Decisions get made on structured facts, not on how confidently a Slack message happened to be worded.
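Concretely, a filled-in report might look like this. The version, verdict, and details below are invented for illustration (AgentLens is the example tool mentioned later on this page):

```text
TOOL:    AgentLens 0.4.1 (illustrative version)
INSTALL: brew install agentlens, worked first try, ~2 min
STATUS:  Working in sandbox
TESTED:  Traced a 3-step example agent; diffed traces against current tooling
VERDICT: Adopt for local debugging; not ready for CI
GOTCHAS: Requires Python 3.11+; trace files grow quickly, rotate weekly
```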

3

Every side-effect action waits for a human to say yes.

Before installing, testing, or modifying anything outside the sandbox, the agent describes the plan and the timeline — then halts. "I'll brew install AgentLens, init an example project, and compare traces. OK to proceed?" Nothing runs silently.
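The gate itself is simple in shape. A minimal sketch with hypothetical function names (the agent's real internals are not shown on this page): every side-effect action is described first, then blocked until a human approves.

```python
def approve(plan: str, auto_yes: bool = False) -> bool:
    """Describe the plan, then block until a human says yes."""
    print(f"PLAN: {plan}")
    if auto_yes:  # bypass for automated tests only, never the default
        return True
    return input("OK to proceed? [y/N] ").strip().lower() in ("y", "yes")

def run_side_effect(plan: str, action, auto_yes: bool = False):
    """Run `action` only if the plan was approved; otherwise do nothing."""
    if not approve(plan, auto_yes):
        print("Skipped: no approval given.")
        return None
    return action()
```

The design choice is that approval is the default path, not an opt-in flag buried in config: an action that never asked simply never runs.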

4

The audit trail is automatic, not a policy.

Terminal commands logged. Browser actions recorded. Every test reproducible. If a tool evaluation reached the wrong conclusion, you can replay the exact commands the agent ran — no detective work, no "I think I did it this way".
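A sketch of how command logging can be made automatic rather than a policy; the log path and function name are assumptions, not the agent's actual implementation:

```python
import datetime
import pathlib
import shlex
import subprocess

LOG = pathlib.Path("meta/audit.log")  # assumed location for the audit trail

def run_logged(cmd: str) -> subprocess.CompletedProcess:
    """Run a command and append it, with timestamp and exit code, to an
    append-only audit log so any evaluation can be replayed later."""
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    LOG.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with LOG.open("a") as f:
        f.write(f"{stamp}\t{cmd}\texit={result.returncode}\n")
    return result
```

Because every command goes through the same wrapper, "replay the evaluation" reduces to re-running the logged lines in order.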

What you get

Three things change once the agent is running.

~30 min

Per structured tool evaluation

From "vet this framework" to a report the team can make a decision on — including install, test, and verdict with gotchas.

31 concepts

Auto-generated last month

Gap-driven concept stubs fill the most-referenced-but-missing articles in the wiki. No new KB debt piling up.

~150 → 93

Broken links trending down

Weekly lint surfaces and repairs dead references. The wiki gets measurably healthier week over week instead of silently rotting.

Numbers observed in Brilworks' internal reference deployment. Actual figures on your stack will depend on wiki size, tool volume, and how strictly you want the approval gates enforced.

Is this right for you?

Honest fit criteria. We'd rather say no than oversell.

Strong fit if

  • You maintain an internal wiki (BookStack, Outline, MediaWiki, or similar) with 100+ articles
  • Broken links and missing concept pages frustrate your team on a regular basis
  • You evaluate new tools or frameworks often enough to want a structured report format
  • You want infrastructure maintenance on a schedule rather than a tap-on-the-shoulder

Not a fit if

  • You have no internal wiki, or a wiki with fewer than 20 articles
  • You never evaluate new tools — generic automation will be overkill
  • You don't have terminal or admin access to the wiki's host
  • You want an agent that silently installs and modifies things without asking

Book a 30-minute scoping call.

We'll walk through your wiki and your current tool-evaluation pain, map both against the three-cron pattern, and tell you honestly whether it fits.