ASWRITTEN[AI]
Collective memory for AI-native teams
THE PROBLEM
Your team is already doing context engineering — RAG pipelines, project instructions, documentation wikis. Your AI tools have more organizational context than ever. But they can only access knowledge that's been written down.[1]
The most valuable knowledge in your organization — why your architect made that design decision, how your senior engineers actually think about the codebase, the methodology that makes your best people 10x more effective — was never documented.[2] It lives in their heads. No retrieval system can access it because it doesn't exist as an artifact. When those people are busy, your team waits. When they leave, that knowledge leaves with them.[3]
WHAT WE DO
We extract undocumented knowledge from your senior people through consultant-led interviews guided and deepened by AI[4], and structure it into a versioned knowledge graph that steers AI reasoning — not just informs it.[5] Between sessions, conversational AI extracts knowledge while your staff works, probing for decision rationale and filling gaps identified by the system.
We start with three sessions with your key people. Two to four weeks. Your team gets a working collective memory — organizational context at the top of every AI interaction, grounded in how your experts actually think.
HOW IT WORKS
1. We interview your senior people — CEO, CTO, architect, domain experts. Scarlet leads the interviews; AI guides the conversation to probe for reasoning, methodology, and decision rationale. Between sessions, conversational AI continues extracting as your staff works, without requiring dedicated expert time.
2. Extracted knowledge is structured into a narrative architecture — an RDF knowledge graph with conviction levels, domain decomposition, and full provenance. It lives in your git repo, version-controlled alongside your code.[6]
3. Every AI tool on your team compiles the same organizational worldview via MCP. Claude, Claude Code, GitHub Copilot, Codex — tool-agnostic by design.[7] When the worldview evolves, every agent updates.
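To make the structure in step 2 concrete, here is a minimal sketch of a triple store with per-claim conviction and provenance. The predicate names, the memory identifier, and the class layout are illustrative assumptions, not aswritten.ai's actual ontology; only the conviction scale (Notion → Stake → Boulder → Foundation) comes from the citations below.

```python
from dataclasses import dataclass

# Conviction scale from the citations: Notion < Stake < Boulder < Foundation.
CONVICTION = ["Notion", "Stake", "Boulder", "Foundation"]

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

class KnowledgeGraph:
    """A toy triple store attaching a conviction level and a source to each claim."""

    def __init__(self):
        self.claims = []  # list of (Triple, conviction, source)

    def assert_claim(self, triple, conviction, source):
        assert conviction in CONVICTION
        self.claims.append((triple, conviction, source))

    def settled(self, minimum="Boulder"):
        """Return claims at or above a conviction threshold."""
        floor = CONVICTION.index(minimum)
        return [t for t, c, _ in self.claims if CONVICTION.index(c) >= floor]

kg = KnowledgeGraph()
kg.assert_claim(
    Triple(":git_native_storage", ":decidedBy", ":scarlet_dame"),
    conviction="Foundation",
    source="memory-2026-02-interview",  # hypothetical memory identifier
)
print(len(kg.settled()))  # → 1
```

The point of the conviction threshold is the distinction the next section draws: an agent can treat Foundation-level claims as settled direction while surfacing lower-conviction claims as open questions.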
WHY THIS IS DIFFERENT
Everyone else gives AI more context. We give it direction.[8] The knowledge graph isn't a flat document store. It knows what's settled versus what's in debate, who decided what and why, and where the organization is headed — not just where it's been.
Git-native means your worldview is code — branchable, reviewable, diffable.[9] Build automations that respond when strategic direction shifts. Multiple people collaborate on how your AI thinks through the same workflows your developers already know.
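Because the worldview is plain files in git, an automation that reacts to strategic shifts can start from an ordinary diff between commits. The sketch below assumes a `worldview/` directory in the repo; that path and the downstream step are illustrative, not the product's actual layout.

```python
import subprocess

def changed_worldview_paths(name_only_diff: str, prefix: str = "worldview/"):
    """Filter `git diff --name-only` output to files under the worldview directory."""
    return [p for p in name_only_diff.splitlines() if p.startswith(prefix)]

def worldview_changed(base: str, head: str) -> bool:
    """True if any worldview file changed between two commits (shells out to git)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, head],
        capture_output=True, text=True, check=True,
    )
    return bool(changed_worldview_paths(out.stdout))

# In CI, base/head would come from the push event payload, e.g.:
# if worldview_changed("origin/main", "HEAD"):
#     ...  # recompile the worldview and notify agents (hypothetical step)
```

This is the sense in which "a worldview change is a GitHub push event" (citation 9): the trigger, the diff, and the review all reuse workflows developers already run.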
YOUR DATA, YOUR INFRASTRUCTURE
Your collective memory lives in your git repository — not our servers, not a vendor cloud. You own it completely.[10] LLM extraction runs through your own API keys (BYOK) and our zero-data-retention (ZDR) proxy. On-premise deployment is available for regulated environments.[11] The architecture separates deterministic operations (commit, compile, assemble) from LLM operations (extract, introspect, generate) — on-prem covers the data path while LLM calls route through your approved providers.
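The deterministic/LLM split can be pictured as a small dispatcher that keeps data-path operations local and sends only LLM-backed ones to a client-approved provider. The operation names follow the text above; the routing table itself is a sketch, not the product's code.

```python
# Deterministic operations stay on the local data path; LLM operations
# route to whichever provider the client has approved (BYOK).
DETERMINISTIC = {"commit", "compile", "assemble"}
LLM_BACKED = {"extract", "introspect", "generate"}

def route(operation: str, approved_provider: str = "client-approved-llm") -> str:
    if operation in DETERMINISTIC:
        return "local"
    if operation in LLM_BACKED:
        return approved_provider
    raise ValueError(f"unknown operation: {operation}")

print(route("compile"))  # → local
print(route("extract"))  # → client-approved-llm
```

Under this split, an on-prem deployment only needs to cover the "local" branch; compliance review can focus on the single egress point for LLM calls.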
NEXT STEP
A 60-minute call with Scarlet Dame, our founder. She'll show you by doing it — interviewing you about one domain your team knows deeply, and demonstrating what your AI can do with that knowledge.
scarlet@aswritten.ai • aswritten.ai
ABOUT THIS DOCUMENT
This one-sheet was generated from aswritten.ai's own collective memory — the same system described on the preceding page. Every factual claim was annotated against our compiled knowledge graph using the annotate tool, which maps each assertion to a specific concept in the graph, traces it back to the memory where it was first captured, identifies the person who contributed that knowledge, and reports how settled the claim is (its conviction level).
The result: 91% of the claims on the front page trace to specific people, conversations, and decisions — with full provenance. The 9% that don't are either rhetorical transitions or claims not yet committed to memory. Those are gaps we can fill.
When the organizational worldview changes — a new memory is committed, a conviction level shifts, a strategic decision evolves — this document regenerates and the citations update automatically. What you're reading is not a static artifact. It's a compilation target.
CITATIONS & PROVENANCE
- Articulated during competitive positioning work (March 2026), validated across discovery calls with Mike Sackton (enterprise architect), Billy Sylvester (infrastructure advisor), Martin Kess (dev-native team), and Martin Klepsch (AI consultancy). Replaces prior "zero context" framing. Foundation conviction. ↩︎
- Mike Sackton, Chief Architect at Escher Group (150-person enterprise, EU postal services), broke the original pitch premise: "The reality is that we don't [have knowledge locked in AI]. We have a lot of knowledge locked in people's heads." His posture shifted from advisory to purchase intent on this reframe. Foundation conviction. ↩︎
- Cost framing validated across beta program and advisor calls: new hires at 50% productivity for 3–6 months, seniors losing 20% of time to repeat questions, total context loss on departure. Quantified at $1.5M+/year for a 10-person team. Foundation conviction. ↩︎
- Three-moment extraction model formalized through product taxonomy work (Feb–Mar 2026): consultant-led interviews, decision-point extraction during work, and consolidation during review. Between sessions, conversational AI continues extraction without requiring expert time. Foundation conviction. ↩︎
- The architecture gap traces to the founding thesis: emergence theory (2012) → GPT-2 steering vectors (2019) → manual narrative architecture at Vouch.io → automated extraction at aswritten.ai. Scarlet's framing: "Just putting your docs into RAG doesn't make them steer AI effectively." Foundation conviction. ↩︎
- Git-native colocation is a Foundation-level architectural decision. The RDF knowledge graph uses conviction levels (Notion → Stake → Boulder → Foundation) to signal how settled claims are, and seven-domain decomposition to prevent attention dilution. Client owns all data. ↩︎
- Cross-platform validation from Mike Sackton's enterprise beta (February 2026): his team ran GitHub Copilot CLI with GPT5.2 Codex against the same collective memory Claude uses — their tool choice, not a controlled test. Boulder conviction. ↩︎
- Traces to Scarlet Dame's 2019 work with GPT-2: the same model given different narrative frames reasons differently about identical facts. This became the founding thesis and the theoretical basis for the ontology's conviction levels and domain decomposition. Foundation conviction. ↩︎
- A worldview change is a GitHub push event; automations respond via GitHub Actions and MCP. The multiplayer distinction — contribution-side (co-authoring how AI thinks) vs. conversation-side (team chats) — validated by co-founder Tony Maley as a daily pain point. Foundation conviction. ↩︎
- All data lives in the user's own GitHub repository. No backend storage, no vendor cloud. Client owns all memories, transactions, and custom ontology extensions. Validated during enterprise beta onboarding. Foundation conviction. ↩︎
- Enterprise compliance (GDPR, data sovereignty) confirmed as adoption gate by Mike Sackton. The deterministic/LLM split — commit and compile are local; extract and introspect route through approved providers — enables on-prem deployment. Boulder conviction. ↩︎