The gutt Manifesto
What is context? Simple question, but the answer runs deep.
Context is what gives meaning to information. It answers who, what, when, where, why, and under what circumstances. Without it, you get misinterpretation, lost knowledge, and repeated mistakes. Experts beat novices not because they have more data but because they carry richer context that lets them read signals others miss.
That observation opened a door.
The Nature of Knowledge Work
Most intellectual labor is gathering context about the people and organization you work with. The longer you're in, the less effort everything requires. Not because tasks get simpler, but because you stop burning energy figuring out the surrounding landscape for every decision.
This compounds. Deep organizational context plus general expertise creates senior people. Add the ability to coordinate others, and you get leadership. The org chart maps surprisingly well onto accumulated context.
Here's where AI enters the picture. Large language models solved the expertise problem—they have near-infinite general knowledge. But they have zero organizational memory. Every conversation starts fresh. They are perpetual new hires with perfect skills and no understanding of how things actually work here.
The Formula
Expertise travels with you. Context stays behind. It is trapped in one organization, built over months or years through slow absorption.
A senior engineer isn't just technically sharper. She knows why the codebase has that strange legacy module, who actually understands the billing edge cases, what leadership cares about behind the stated priorities, and which meetings matter versus which are theater. None of that is expertise. All of it is context.
Performance = Expertise × Context
This explains why onboarding bleeds money. Six to twelve months for a senior hire to hit full speed—that isn't skill ramp. They were already skilled. It is context acquisition. It's every "let me give you some background" conversation, every Slack thread someone has to read, and every mistake made because nobody told them the unwritten rules.
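To see why multiplication, not addition, is the right shape here, consider a toy calculation. The numbers below are invented purely for illustration:

```python
# Toy numbers, purely illustrative: expertise and context on a 0-10 scale.
def performance(expertise: float, context: float) -> float:
    return expertise * context

brilliant_new_hire = performance(expertise=9, context=1)   #  9.0
average_veteran = performance(expertise=6, context=7)      # 42.0

# The less skilled insider outperforms the expert newcomer several
# times over, and no amount of expertise compensates for zero context:
day_one_genius = performance(expertise=10, context=0)      #  0.0
```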
Current AI can't touch this. Ask ChatGPT how to write a Python function—instant answer. Ask it what to build given your technical debt, team tensions, Q3 priorities, and three prior failed attempts at the same thing—nothing. It lacks the connective tissue.
Data Is Not Context
Here is where companies go wrong when trying to make AI "know" their organization.
They feed it documents. Wikis. Databases. They build retrieval systems that fetch relevant chunks when questions come in. RAG: Retrieval-Augmented Generation. Better than nothing, but fundamentally limited.
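A minimal sketch of the pattern, with a toy hashed bag-of-words standing in for a real embedding model and vector store:

```python
import numpy as np

# Sketch of the RAG loop: embed documents, embed the query, fetch the
# nearest chunks, stuff them into the prompt. The "embedding" here is
# a toy hash trick; only the shape of the pipeline matters.

def embed(text: str, dim: int = 256) -> np.ndarray:
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Billing retries are capped at three attempts per invoice.",
    "The legacy module exists because of the 2019 vendor migration.",
    "Q3 priority: reduce onboarding time for enterprise accounts.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(query)            # cosine similarity
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# The fetched chunks go into the prompt. What the model never sees:
# why they were written, what they superseded, who disagreed.
prompt = "Context:\n" + "\n".join(retrieve("why does billing retry?")) \
         + "\n\nQuestion: why does billing retry?"
```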
A document is a snapshot. It captures what someone thought was worth writing down, at one moment, for one audience. It doesn't hold why decisions were made, what alternatives got rejected, who disagreed and why they eventually came around, what shifted since it was written, or whether anyone still believes it's accurate.
Hand AI a pile of documents and you are giving it fragments without the threads that connect them. It is like handing someone a shoebox of photographs and asking them to understand your family. They can describe what they see, but they will never grasp the relationships, the tensions, the history, or the running jokes.
This is why AI hallucinates in organizational settings. Hallucination isn't random noise—it is gap-filling. Ask a question requiring context the model doesn't possess, and it constructs something plausible-sounding from patterns in the available data. Less context means more gaps. More gaps means more fabrication.
The industry treats this as a retrieval problem: better search, fewer hallucinations. But it is a context problem. The AI can't know what it doesn't know. It can't distinguish current policy from outdated draft. It can't recognize that two documents contradict each other because circumstances evolved. It doesn't understand that a confident statement in a memo was actually hotly contested.
Data delivers information. Context delivers understanding. Without understanding, you get sophisticated pattern-matching that sounds authoritative but may be dangerously wrong—especially when decisions carry weight.
Humans Hallucinate Too
We aren't much better at this, honestly.
Tell someone something. They tell someone else. Pass it through ten people. By the end, the story is so distorted the original speaker wouldn't recognize it. The telephone game isn't just for children—it is how institutional knowledge actually breaks down.
We forget. We misremember. We fill gaps with assumptions and call them facts. We compress nuance into simple narratives because complexity is hard to hold. We translate context into artifacts—documents, emails, messages—and lose accuracy with every translation.
The difference between human hallucination and AI hallucination is that humans at least know they are uncertain. Sometimes. AI confidently presents its fabrications as truth because it has no mechanism for admitting uncertainty about organizational specifics.
Both problems point to the same solution: stop relying on broken transmission. Capture context at the source, continuously, and make it retrievable without the degradation that comes from human memory and human translation.
What Has to Change
Would solving context capture force companies to work completely differently?
Yes—but not in the way you would expect. People still have meetings, still make decisions, still communicate. The activities look similar. But the nature of work shifts fundamentally: from creating artifacts to having conversations that AI turns into artifacts when needed.
Right now, context lives in heads (locked away), scattered files nobody reads (fragmented), message threads that scroll into oblivion (temporary), and tribal knowledge passed through hallway conversations (degraded). For AI to have genuine organizational context, that has to shift from implicit to explicit.
But anything requiring behavior change fails. "Please document everything in this new system" goes nowhere. People have actual work to do. They won't maintain a second system just so AI can benefit.
What works: passive capture. If AI can watch and structure context from existing workflows—meetings already happening, messages already sent, documents already written—organizations don't need to change behavior. They just need to permit access.
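What that could look like, as a sketch. It assumes a model can pull structured decisions out of a transcript; the extract_decisions stand-in below is hypothetical, not a description of any shipping system:

```python
from dataclasses import dataclass
from datetime import datetime

# Sketch of passive capture: context gets structured from artifacts
# that already exist (here, a meeting transcript), so nobody has to
# maintain a second system.

@dataclass
class ContextEntry:
    what: str            # the decision or fact itself
    why: str             # rationale, including what was rejected
    who: list[str]       # people involved
    when: datetime
    source: str          # where it was captured from

def extract_decisions(transcript: str) -> list[dict]:
    """Hypothetical stand-in for an LLM extraction call."""
    return [{"decision": "cap billing retries at three",
             "rationale": "vendor rate limits; settled after two incidents"}]

def capture_meeting(transcript: str, attendees: list[str]) -> list[ContextEntry]:
    return [
        ContextEntry(what=d["decision"], why=d["rationale"],
                     who=attendees, when=datetime.now(),
                     source="meeting transcript")
        for d in extract_decisions(transcript)
    ]
```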
The real shift is about governance, not process. Who sees what in the organizational memory? How do you handle sensitive context? What happens when AI understands things certain humans don't? How do you verify the AI's contextual grasp is accurate? Trust and permissions become the hard problems.
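One way to make "who sees what" concrete: every memory entry carries a visibility scope, and retrieval filters on the reader's scopes before anything reaches the model. A sketch, with invented scope names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopedEntry:
    text: str
    scopes: frozenset[str]   # who may see this entry

MEMORY = [
    ScopedEntry("Retry cap is three per invoice.", frozenset({"eng", "support"})),
    ScopedEntry("The vendor renegotiation is stalled.", frozenset({"leadership"})),
]

def visible_to(reader_scopes: set[str]) -> list[str]:
    # Filter BEFORE generation: the model never sees entries the asker
    # isn't cleared for, so its answers cannot leak them.
    return [e.text for e in MEMORY if e.scopes & reader_scopes]

print(visible_to({"eng"}))   # ['Retry cap is three per invoice.']
```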
The winners won't restructure everything around AI. They will make capture so frictionless it happens naturally—then build AI that actually uses what's been captured.
What Happens Next
Assume context capture gets solved. What breaks? What improves?
Talent dynamics invert. Onboarding shrinks from months to days—a new hire asks why something works a certain way and gets the real answer with full history. When the senior architect leaves, her context doesn't vanish with her. The value of tenure drops because institutional knowledge stops being an individual's moat.
Structure flattens. Much of middle management's worth is context routing—knowing who knows what, connecting the right people to the right questions. AI handles that. Flatter organizations become viable at scales they couldn't reach before. Cross-functional work improves because the "we don't understand what that team does" barrier shrinks.
Decisions get sharper. Fewer repeated mistakes—"we tried this in 2019, here's why it failed" surfaces on its own. Better pattern recognition across departments. Faster pivots when strategy needs to change.
Uncomfortable implications exist too. Where is the line between organizational memory and surveillance? Information asymmetry is power—some people benefit from being the bottleneck. Democratizing context threatens existing hierarchies. And certain roles get exposed: if your value was mostly "knowing how things work around here," that value just became automatable.
The deeper shift: organizations become more like organisms with memory rather than loose collections of individuals who happen to remember different fragments.
Artifacts Become Outputs
Follow the logic further. If AI has organizational context and access to relevant data, we stop needing to create artifacts ourselves.
Documents, tickets, code, slides—these are how humans externalize thinking for other humans. We write specs so others understand requirements. We create tickets so work gets tracked. We write code so machines execute intent. Most of this is translation work: converting context and intent into formats others can consume.
If AI already holds the context, translation becomes optional.
Why write a requirements document if AI knows the requirements, constraints, history, and stakeholders? Why create a ticket if AI understands what needs doing and can execute or delegate directly? Why take meeting notes if AI attended and remembers everything?
What vanishes: status updates, internal documentation, boilerplate code, handoff documents, meeting summaries. What persists: contractual artifacts with legal standing, creative work where authorship matters, approval checkpoints where humans validate outputs, and external communication for people outside the context bubble.
Work shifts from creation to validation. Instead of think-write-review-revise-ship, it becomes: express intent, AI drafts, human validates, ship. Meetings become more valuable as high-bandwidth intent transfer. Thinking matters more than typing. The busywork that is really just artifact production evaporates.
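The new loop, sketched. The draft_from_context function is a hypothetical stand-in for generation grounded in captured context; the point is the control flow, where the human's job is the approve-or-revise decision rather than the typing:

```python
def draft_from_context(intent: str, memory: list[str]) -> str:
    """Hypothetical stand-in for generation grounded in captured context."""
    return f"DRAFT for {intent!r}, grounded in {len(memory)} context entries"

def ship(intent: str, memory: list[str], approve, max_rounds: int = 3) -> str:
    artifact = draft_from_context(intent, memory)
    for _ in range(max_rounds):            # human validates; AI revises
        if approve(artifact):
            return artifact
        artifact = draft_from_context(intent + " (revised)", memory)
    raise RuntimeError("no sign-off after revisions; escalate to a human-led draft")

doc = ship("Q3 rollout plan", ["retry cap decision", "2019 vendor migration"],
           approve=lambda draft: "Q3" in draft)
```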
Back to Conversation
All roads lead here.
The irony of current work: we think in concepts and relationships, we communicate through dialogue, then we force everything through artifacts because that was the only way to make knowledge durable and shareable.
Artifacts aren't natural. They are a compression format. We invented them because memory fails, because others weren't in the room, because async required it, because machines needed structured input.
If AI captures context from conversation and generates artifacts on demand, the workflow inverts.
The old model:
Talk → Human creates artifact → Artifact gets consumed
The new model:
Talk → AI absorbs context → Artifact appears when needed
You speak. Think out loud. Debate. Decide. The document becomes an output, not the work itself.
This resembles how pre-literate cultures operated. Oral societies had people whose job was remembering—historians, elders, storytellers. Knowledge lived in dialogue and narrative, not files. We aren't going backward. We are fusing oral culture's strengths (natural, high-bandwidth, relational) with written culture's strengths (durable, searchable, shareable).
The interface to organizational intelligence becomes conversation. Not dashboards. Not document editors. You ask, discuss, decide. AI handles the rest.
The Thesis
This reasoning journey started with a question about context and arrived somewhere unexpected.
Context is the real asset, not documents. Today's AI has knowledge but no organizational memory. Capturing context needs to be frictionless—no behavior change required. Once AI holds context, artifacts become generated outputs rather than primary labor. Work returns to conversation, its most natural form.
This isn't AI that does existing tasks faster. It is infrastructure for a fundamentally different way of working. Organizational memory as a primitive. The foundation for everything else.
We reached this conclusion through first principles, not by starting with a pitch and working backward. Each question pushed the logic forward until the destination became inevitable.
One final point: this document proves its own thesis. A conversation happened. The thinking occurred in dialogue. Then, at the end, an artifact materialized—not as the work, but as a byproduct of work already completed through the most natural form of human communication.
The conversation was the work. This is the receipt.