Building with AI
I Built a Second Brain That Actually Works — Here's Exactly How

I've tried every second brain system. Notion databases. Apple Notes folders. Roam graphs. They all die the same way: you start organised, maintenance piles up, you skip it, the system degrades, and six months later you're back to scattered notes across twelve apps. The problem was never the tool. It was the maintenance cost.
Last week I built a knowledge vault in Obsidian using Claude Code that solved this permanently. One session. 125+ files. 112 wiki pages covering 11 clients, 30 people, 49 projects, 8 analysis docs, 6 playbooks. Every page cross-referenced with wiki-links. The Obsidian graph view shows a genuine constellation of interconnected knowledge.
I didn't write a single wiki page myself.
The architecture
The approach follows Andrej Karpathy's LLM Wiki pattern — a framework where an LLM maintains a persistent, structured knowledge base that the human feeds and queries. Credit where it's due: his thinking on this is the foundation.
Three layers:
The first is raw sources — immutable. Articles, call transcripts, brain dumps, email threads. Claude reads from these but never modifies them. This is your evidence layer.
The second is the wiki — Claude-maintained markdown pages. Entity pages for every person, project, and company. Concept pages for technical patterns. Analysis docs. Playbooks. Claude owns this layer entirely. It writes, updates, cross-references, and reorganises.
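To make that concrete, here is the rough shape an entity page might take. Every name, link, and detail below is invented for illustration:

```markdown
# Jane Example

**Role:** CTO at [[Acme Legal]]
**First seen:** 2025-11-03 call transcript

Led the [[Stripe Connect Rollout]] on the client side. Introduced by
[[John Sample]] during the [[CRM Migration]] project.

## Open threads
- Waiting on sandbox credentials (see [[2025-11-10 call notes]])
```

Because every named entity becomes a wiki-link, Obsidian's graph view gets its constellation for free, and Claude can follow the same links when it answers a query.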
The third is the schema — a CLAUDE.md file that tells Claude how the vault works. What conventions to follow. How to ingest new sources. How to query. How to lint. This is the most important file in the entire system, and it's the one most people will underinvest in.
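As a rough sketch, a minimal CLAUDE.md might look something like this. The section names and rules here are illustrative, not a prescribed format:

```markdown
# Vault schema

## Layout
- `sources/`: immutable. Never edit or delete anything here.
- `wiki/`: your layer. One page per person, project, company, or concept.
- `wiki/index.md`: master index. Keep it current after every ingest.

## Conventions
- Link every named entity with [[wiki-links]] on first mention.
- Every claim on a wiki page cites the source file it came from.
- Flag contradictions between sources instead of silently picking one.

## Tasks
- Ingest: summarise the new source, update every affected page, report what changed.
- Lint: list orphan pages, stale claims, and missing cross-references.
```

The point is that the schema makes the maintenance rules explicit and machine-readable, so every session starts with the same contract.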
The human curates sources and asks questions. The LLM does all the bookkeeping.
The moment it clicked
After hours of ingesting project notes from months of client work, I asked a simple question: "Who is Edward McNally?"
A name I'd half-forgotten from a conveyancing CRM project months ago. Within seconds, the vault told me: he's the contact at a specific law firm, the first firm onboarded to Stripe Connect on the platform, connected to these other people and this project timeline.
That's 30 seconds versus scrolling through months of WhatsApp messages trying to piece together context I'd already lost.
That moment is when a knowledge vault stops being a productivity experiment and starts being infrastructure.
Why most second brains die
The failure mode is always the same. You set up a beautiful system. Tags, folders, templates. For two weeks, you maintain it religiously. Then you skip a day. Then a week. The cross-references fall behind. New notes don't get linked to old ones. The system becomes a graveyard of orphaned pages that you feel guilty about every time you open the app.
The core problem: maintenance is manual, and manual maintenance doesn't survive contact with a busy schedule.
The breakthrough with the LLM Wiki pattern is that Claude does all the maintenance. Reorganising the vault is a prompt. Cross-referencing a new source against 112 existing pages is automatic. A weekly lint pass catches contradictions, orphan pages, and stale claims — written as a report I just read.
The human's only job is to feed it and ask questions. If you do that daily, the system compounds. If you skip a week, no big deal — pick back up. There's no maintenance debt because the maintenance cost is near zero.
What this is not
This is not RAG. There's no vector database, no embeddings, no semantic search layer. The vault is plain markdown files that Claude reads directly.
This is not NotebookLM. It's not uploading documents to a chat interface and asking questions about them. The vault is persistent, structured, and maintained across sessions. It grows over time.
This is not a chatbot with file uploads. The difference is the schema layer — the CLAUDE.md file that gives Claude a structured understanding of how the vault works, what conventions to follow, and how to maintain consistency across hundreds of pages.
It's closer to having a research assistant who maintains a filing system, remembers everything, and never gets tired of cross-referencing.
Cross-engagement insights the vault surfaces automatically
The most valuable thing about a connected knowledge vault isn't finding what you're looking for. It's surfacing connections you weren't looking for.
When I ingested notes from a small automation project, the vault cross-referenced the client with a contact who turned out to be connected to a completely separate engagement through a business partner. What looked like an isolated project was actually a potential gateway to a much larger opportunity. I knew this intuitively, but the vault made the connection visible and permanent.
On another occasion, the vault connected an investor who had spent significant time reviewing a deck — across multiple visits — to a co-founder at his own fund. What had looked like a cold lead in the analytics was actually a warm relationship signal. The vault caught it because the people pages were cross-referenced.
And after ingesting work across three different client websites, the vault showed that I'd independently developed the same methodology for each. I hadn't consciously recognised the pattern. The vault turned it into a documented playbook — a productisable service offering that was hiding in plain sight.
The daily workflow
Three actions, repeated:
Ingest: drop a source — an article, a call transcript, an email thread, a brain dump. Claude reads it, discusses key points, writes a summary, updates every relevant wiki page, and tells me exactly what changed.
Query: ask the vault anything. Claude reads the index, drills into relevant pages, answers with citations, and files substantive answers back into the wiki so they compound. The vault gets smarter every time you ask it a question.
Lint: once a week, Claude reads every page and writes a health report — contradictions found, orphan pages identified, stale claims flagged, missing cross-references suggested. The cleanup runs automatically on Sunday evenings.
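The orphan-page check is the one part of the lint that is mechanical enough to script without an LLM at all. A minimal sketch, assuming a flat `wiki/` folder of `.md` files using `[[wiki-link]]` syntax:

```python
import re
from pathlib import Path

# Captures the page name in [[Page]], [[Page|alias]], or [[Page#heading]].
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")

def find_orphans(wiki_dir):
    """Return wiki pages that no other page links to."""
    pages = {p.stem: p for p in Path(wiki_dir).glob("*.md")}
    inbound = {name: 0 for name in pages}
    for name, page in pages.items():
        for match in WIKI_LINK.finditer(page.read_text(encoding="utf-8")):
            target = match.group(1).strip()
            if target in inbound and target != name:  # ignore self-links
                inbound[target] += 1
    return sorted(n for n, count in inbound.items() if count == 0)
```

Everything else in the lint — contradictions, stale claims, missing cross-references — needs judgment, which is why it stays a weekly Claude prompt rather than a script.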
What I'd tell someone starting today
Don't try to backfill a year of content on day one. Start lean. Ingest what's live right now — today's call notes, this week's project updates. Let it grow organically.
The schema file is the most important file in the vault. Spend real time on it. It's the difference between a knowledge base and a pile of markdown.
Obsidian is the window. Claude Code is the brain. You barely need to learn Obsidian — the graph view is nice, the search is fast, but the real value is in what Claude maintains behind the scenes.
The habit of feeding it daily matters more than the structure. The LLM handles structure. Your job is to give it raw material.
The vault gets useful around day three. It gets indispensable around day fourteen. By day thirty, you'll wonder how you operated without it.
The real point
Everyone's talking about AI agents for coding, content, and automation. Almost nobody is talking about AI as a knowledge maintenance system — something that keeps your entire professional context organised, cross-referenced, and queryable without you lifting a finger.
The vault is only as good as what you feed it. But the maintenance cost is near zero, and the compound value of a system that remembers everything, connects everything, and never lets context decay is something I haven't found in any other tool.
125 files. 112 wiki pages. One session to build. Zero pages written by hand.
The second brain that works is the one you don't have to maintain.