An X/Twitter agent that turns tweets into KB articles — and never auto-posts.
Bulk-ingest tweet URLs into structured markdown articles with engagement stats, media, and related concepts. A daily curation cron drafts breaking-news tweets — but every post waits for a human to approve it.
Founders want a social presence. They don't want to babysit a social pipeline.
A founder opens Slack, sees a tweet worth saving, drops it into #research. A week later, the tweet is buried three hundred messages deep. Next month the sales team asks for "that thing we saw about the new AI launch" and nobody can find it.
Meanwhile the founder is spending thirty minutes a day curating news, writing hot takes, and posting to X by hand. Scripted automations hit rate limits and get shadowbanned. Generic AI tools happily auto-post something embarrassing at 2am. Tweets that could be searchable research artifacts live and die in Slack.
The root cause isn't the founder's discipline. It's that there's no pipeline between "interesting tweet" and "searchable KB article", and no safe path between "draft tweet" and "posted tweet". Without that middle layer, you pick between burnout and embarrassment.
This agent adds the middle layer: sequential rate-limit-safe ingestion, a fixed markdown schema so downstream agents can parse reliably, and a human approval gate in front of every published tweet.
Two modes: bulk ingestion into the KB, and daily curation with human sign-off.
Bulk intake runs sequentially, not in parallel.
Drop in 2–19 tweet URLs, walk away, and come back to finished KB articles. Tweets are fetched one at a time, with exponential backoff and a Nitter fallback when the bird CLI hits rate limits. A batch of ten takes a predictable 2–5 minutes, with zero ban risk.
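A minimal sketch of that sequential loop. The `bird fetch` subcommand, the Nitter URL rewrite, and the retry count are illustrative assumptions, not the agent's actual interface:

```python
import subprocess
import time

def backoff_delay(attempt: int) -> float:
    """Exponential backoff between retries: 1s, 2s, 4s, 8s, ..."""
    return float(2 ** attempt)

def nitter_fallback_url(url: str) -> str:
    """Rewrite an X/Twitter status URL to a Nitter mirror (illustrative)."""
    return url.replace("x.com", "nitter.net").replace("twitter.com", "nitter.net")

def fetch_tweet(url: str, max_retries: int = 4) -> str:
    """Fetch one tweet, backing off on failure, then falling back to Nitter."""
    for attempt in range(max_retries):
        result = subprocess.run(["bird", "fetch", url],  # assumed subcommand
                                capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout
        time.sleep(backoff_delay(attempt))
    # Last resort: scrape the Nitter mirror instead of the bird CLI
    result = subprocess.run(["curl", "-s", nitter_fallback_url(url)],
                            capture_output=True, text=True)
    return result.stdout

def ingest(urls: list[str]) -> list[str]:
    # Strictly sequential, never parallel, to stay under rate limits
    return [fetch_tweet(u) for u in urls]
```

The design choice is the last line: a plain list comprehension instead of a thread pool, because the rate limiter is the bottleneck, not the network.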
Every tweet becomes a reusable research artifact.
Fixed markdown sections — Tweet, Engagement, Author, Body, Media, Related Concepts. Downstream agents (research, marketing validator) can parse tweets the same way they parse any other KB file. A tweet stops being something you have to remember where you saw; it becomes something you can search.
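A sketch of what one KB article might look like under that schema — the section names come from above, but every field value here is an invented placeholder:

```markdown
# Tweet
https://x.com/…/status/…

# Engagement
Likes: … | Reposts: … | Replies: … | Captured: …

# Author
@handle — display name, follower count

# Body
Full tweet text, unrolled if it is a thread.

# Media
Links or local paths to attached images and video.

# Related Concepts
- concept-a
- concept-b
```

Because the section order and headings never vary, a downstream agent can split on `# ` and address each field by name instead of re-parsing free-form notes.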
The curation cron dedupes before it drafts.
Daily poll of bookmarks, likes, GitHub trending, HN, ProductHunt. "Have we already written about this repo?" — embeddings catch semantic dupes even with different URLs. The same AI launch doesn't get tweeted three times.
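The dedup check reduces to a nearest-neighbor test over the KB's embedding vectors. A minimal sketch in plain Python — the 0.88 threshold is an illustrative assumption, not a tuned value, and the embedding model is whatever the deployment already uses:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_duplicate(candidate_vec: list[float],
                 kb_vecs: list[list[float]],
                 threshold: float = 0.88) -> bool:
    """Flag a draft as a dupe if it is semantically close to any KB article.

    This catches "same repo, same launch" even when the URLs differ,
    because similarity is computed on meaning, not on the link.
    """
    return any(cosine(candidate_vec, v) >= threshold for v in kb_vecs)
```

In practice the candidate vector would come from embedding the draft's text, and `kb_vecs` from embedding each existing article's Body section.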
A human approves every public post. Every time.
Drafts land in Slack with context and approval buttons. Humans approve or reject. Only then does the bird CLI fire. This is what keeps a CEO account from tweeting something regrettable at 2am — "never auto-post" is a hard rule, not a preference.
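The gate's invariant can be sketched as a tiny state machine: a draft starts pending, and the only code path that posts requires the approved state. The names here (`Draft`, `post_via_bird`, the `POSTED` log) are hypothetical stand-ins, not the agent's real API:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"   # set only by a human clicking the Slack button
    REJECTED = "rejected"

@dataclass
class Draft:
    text: str
    status: Status = Status.PENDING

POSTED: list[str] = []

def post_via_bird(text: str) -> None:
    """Stand-in for the real bird CLI call; records the post for inspection."""
    POSTED.append(text)

def publish(draft: Draft) -> bool:
    # Structural gate: no branch posts a non-approved draft,
    # and there is no flag to skip this check.
    if draft.status is not Status.APPROVED:
        return False
    post_via_bird(draft.text)
    return True
```

The point is that "never auto-post" lives in the type of the flow, not in a config value someone could flip.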
Three things change on day one.
Semantic dedup accuracy
Embeddings-based matching catches duplicates across different URLs (same repo, same AI launch). Prevents spammy-looking feeds.
Tweets published without human sign-off
Zero, across every reference deployment. The approval gate is enforced structurally — there's no flag to disable it.
Searchable tweet corpus over 6 months
Projected from current intake rates. Every tweet the team processes becomes a citable artifact in downstream research and marketing.
Numbers observed in Brilworks' internal reference deployment. Actual figures on your stack will depend on intake volume, follower profile, and how aggressively you run the curation cron.
Honest fit criteria. We'd rather say no than oversell.
✓ Strong fit if
- You run a founder or CEO account with 1K+ followers that needs to stay fresh without burnout
- Your team is curating AI or industry news regularly and losing the good tweets to Slack
- You need a searchable corpus of competitor and trend intel for sales and marketing briefings
- You want curation automated but every public post human-approved
✗ Not a fit if
- You want full autopilot posting with no review — the approval gate is non-negotiable here
- Your account has under 500 followers and curation isn't a bottleneck yet
- You need real-time streaming intake — this is batch and daily, not second-by-second
- You have no Slack workspace and no way to route drafts to humans
Book a 30-minute scoping call.
We'll walk through your current social and research flow, map it against the two-mode intake pattern, and tell you honestly whether it fits — and what it would take to ship.