CLAUDE.md Is Your Most Important Config File

12 min read
claude-code · ai · developer-tools · workflow · productivity · indie-hacking

I have 33 CLAUDE.md files.

Not a typo. Thirty-three. Across agency client projects at Planetary, indie products at Helsky Labs, personal experiments, and the global config that governs all of them. Each one is different. Each one was written in blood — or at least in lost work and wasted evenings.

Most developers who use AI coding tools treat context files as an afterthought. A few lines of "use TypeScript" and "prefer functional components" and they call it done. Then they wonder why the AI keeps making decisions they disagree with, reverting changes they did not ask to revert, and confidently claiming code works when it has never been executed.

The context file is not documentation. It is not a README. It is the single most important configuration file in your project because it shapes every decision an AI assistant makes on your behalf.

Here is what eleven months and 33 files taught me about writing them well.

The Lesson That Started Everything

In the early days of using Claude Code, I lost work. Not once. Multiple times.

The AI would encounter a build error and decide — independently, without asking — to revert my uncommitted changes. git checkout . and gone. Hours of work, vanished, because the AI thought "fixing the problem" meant "making the problem not exist anymore."

Another time, it deleted a file it considered unnecessary. That file was a sibling project's config that happened to be in the workspace. Not tracked in git. Not recoverable.

A third time, it told me a component was "ready and working." I committed it, pushed it, went to bed. The next morning: broken build. The component referenced a hook that did not exist. The AI had written confident, syntactically correct code that called a function it invented.

Each of these incidents became a rule in my CLAUDE.md. Not abstract guidelines — specific prohibitions born from specific disasters.

```markdown
## Destructive Git Commands Are Forbidden

The following commands must never be run without explicit user permission:
- git checkout -- on any file
- git restore on any file
- git reset in any form
- git clean
- git stash drop

Do not revert uncommitted changes for any reason. If the build fails,
if there are merge conflicts, or if changes seem problematic,
report the issue and ask the user how to proceed.
```

That rule exists because I lost work. The phrasing is absolute because the behavior was absolute. There is no scenario where I want an AI to silently discard my uncommitted changes.

```markdown
## Honesty About Untested Code

Do not claim code works if it has not been executed.
Do not imply code is "ready" or "complete" without verification.

When presenting written code:
- State explicitly: "I have not tested this yet."
- Warn about assumptions: "This assumes [X] exists."
- If tests were written but not run, say so clearly.
```

That rule exists because the AI lied to me. Not maliciously — confidently. And confidence without verification is worse than uncertainty because you stop checking.

Every hard rule in my config is a scar. If you have no scars yet, you will.

The Five-Layer Architecture

After months of iteration, my CLAUDE.md setup has five layers. Each one serves a different purpose, and the separation matters.

Layer 1: Global Rules (The Constitution)

Lives at ~/.claude/CLAUDE.md. Applies to every project, every context, every session. This is where the non-negotiable behaviors live.

My global file includes:

  • Destructive action prohibitions — the git rules above, plus file deletion safety
  • Honesty requirements — never claim untested code works
  • Attribution rules — never add AI co-author tags to commits
  • Writing voice guidelines — how to write when producing content for me
  • Development workflow — atomic commits, build validation before every commit

The global file is roughly 4,000 tokens. That is significant context cost. Every token here is consumed in every session regardless of what I am working on. Nothing gets into the global file unless it applies universally and has been validated across multiple projects.

The temptation is to put everything here. Resist it. A bloated global file means every session starts with the AI reading instructions that are irrelevant to the current task.

Layer 2: Workspace Context (The Map)

Lives at the root of multi-project directories. My workspace root at ~/code/ has a CLAUDE.md that explains the directory structure — which folders are independent repos, which are monorepos, what tech stack each area uses.

```markdown
## Important: Multi-Repo Awareness

- helsky-labs/ has its own .git at the root level
- Most folders under work/planetary/ are independent git repos
- work/client-projects/ may contain monorepos
- Always verify which repo you're in before git operations
```

This layer prevents the AI from running git status at the wrong level or making assumptions about which project context it is operating in. It is a map, not a set of rules.
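The "verify which repo you're in" check is mechanical enough to sketch in code. Here is a minimal illustration (plain Python, not part of Claude Code or this workspace) of what that verification amounts to: walk upward from the working directory until a `.git` entry appears, mirroring what `git rev-parse --show-toplevel` does.

```python
from pathlib import Path
from typing import Optional

def find_repo_root(start: Path) -> Optional[Path]:
    """Walk upward from `start` until a directory containing .git is found.

    Returns the repo root, or None if `start` is not inside a git repo.
    Note: .git may be a directory or, for worktrees/submodules, a file,
    so we check for existence rather than is_dir().
    """
    start = start.resolve()
    for candidate in [start, *start.parents]:
        if (candidate / ".git").exists():
            return candidate
    return None
```

In a nested workspace, running this from a subdirectory of an independent repo returns that repo's root, not the workspace root — exactly the distinction the map exists to enforce before any git operation runs.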

Layer 3: Organization Rules (The Team)

At Planetary, there is a CLAUDE.md with agency-specific conventions:

```markdown
## PR Requirements

Every PR must include:
- Ticket reference
- Summary of changes
- Testing/QA steps
- Acceptance criteria
```

At Helsky Labs, the rules are different — ship fast, validate faster, shared foundation across products. These organization-level files encode the culture of the team or studio without polluting the global config.

Layer 4: Project Instructions (The Specifics)

This is where most of the value lives. Each project gets its own CLAUDE.md with:

  • Tech stack and versions — not just "Next.js" but "Next.js 16 with App Router, React 19, TypeScript 5.7, Tailwind CSS v4"
  • Build and dev commands — `npm run dev`, `npm run build`, `npm run test`, and any project-specific scripts
  • Architecture decisions — where components live, how state is managed, which patterns to follow
  • Known constraints — "never lint this project, always ask user to build" or "this project uses Zustand, not Redux"
  • Database schema highlights — the tables and relationships the AI needs to understand
  • Integration details — which APIs, which environment variables, which services

My DropVox macOS project has specific instructions about Swift conventions, WhisperKit integration patterns, and the code signing pipeline. A client portal project has rules about its custom design system components — StandardButton, StandardInput, StandardModal — with semantic tokens that the AI must use instead of inventing its own.

The project-level file is the one that makes the biggest immediate difference. Without it, the AI is guessing. With it, the AI is following your architecture.

Layer 5: Session Memory (The Journal)

Claude Code has a memory system that persists across sessions. I use it for patterns confirmed across multiple interactions — not session-specific context, but stable knowledge like "this project's test suite takes 4 minutes" or "the client prefers kebab-case for URL slugs."

This layer is the lightest. Most sessions do not add to it. When they do, the additions are small and specific.

What Goes In, What Stays Out

The hardest part of writing CLAUDE.md files is not what to include. It is what to exclude.

Include

Rules born from incidents. If the AI did something wrong and you had to fix it, write the rule. Be specific. "Do not delete files" is weak. "Never delete files or folders without moving them to a safe location first. Ask before deleting anything not explicitly tracked in git" is enforceable.

Concrete technical context. Framework versions, directory structure, build commands, database schema. The AI cannot infer these from code alone — or rather, it can try, but it will be slow and sometimes wrong.

Decision rationale. Not just "we use Zustand" but "we use Zustand because the previous Redux setup had 40 files of boilerplate for 6 stores. Do not introduce Redux patterns." Context prevents the AI from helpfully "improving" your architecture back to something you already rejected.

Patterns with examples. "Follow existing patterns" is useless. Show a code snippet of the actual pattern. The AI is excellent at replicating patterns it can see and terrible at inferring patterns from vague instructions.
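To make that concrete, here is the shape such an entry might take. This is a hypothetical fragment — the hook name, query library, and endpoint are invented for illustration, not taken from any of the projects above:

```markdown
## Data Fetching Pattern

All server data goes through typed query hooks. Follow this shape:

    export function useProjects() {
      return useQuery({
        queryKey: ["projects"],
        queryFn: () => api.get<Project[]>("/projects"),
      });
    }

Do not fetch in useEffect. Do not call fetch() directly in components.
```

The snippet does the work the prose cannot: the AI copies the visible structure instead of guessing at "existing patterns."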

Exclude

Anything you would not say to a new team member on day one. If a senior developer joining your team would not need to know it in their first week, it probably does not belong in the context file.

Obvious language features. "Use TypeScript strict mode" is worth stating. "Variables should have descriptive names" is not — the AI already does this, and the instruction wastes tokens.

Aspirational rules you do not actually follow. If your codebase does not have tests, do not write "all code must have 100% test coverage." The AI will either waste time writing tests for every change or produce tests that do not match your actual testing patterns. Be honest about what the project actually does.

Anything that changes weekly. Context files should be stable. If you are updating the file every session, the information belongs in your prompts, not your config.

The Token Budget Problem

Here is the part nobody talks about: context files cost tokens. Every token in your CLAUDE.md is a token that cannot be used for code, conversation, or reasoning.

My global file is 4,000 tokens. A typical project file is 2,000-5,000 tokens. The workspace map is another 1,500. With inheritance, a session can start with 8,000-10,000 tokens of context before I type a single word.

On a 200K context window, that is 5%. Manageable. But I have seen developers with 20,000-token context files — 10% of the window consumed by instructions. At that point, you are actively hurting the AI's ability to reason about your code because there is less room for the actual code.

I built TokenCentric specifically to solve this problem. It shows the real token count using each provider's official tokenizer, visualizes the inheritance chain, and makes it obvious when a file is bloated.

The rule of thumb: if your context file is over 5,000 tokens, audit it. If it is over 10,000, something is wrong. Either you are including information the AI does not need, or you should split it into layers.
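A rough audit does not require each provider's official tokenizer. The sketch below uses the crude ~4-characters-per-token heuristic for English prose — treat the numbers as ballpark only, since real tokenizers vary by provider — and wires in the 5K/10K thresholds from the rule of thumb above:

```python
from pathlib import Path

CHARS_PER_TOKEN = 4  # crude heuristic for English prose; real tokenizers differ

def estimate_tokens(text: str) -> int:
    """Ballpark token count: roughly one token per 4 characters."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def verdict(tokens: int) -> str:
    """Apply the 5K/10K rule of thumb."""
    if tokens > 10_000:
        return "something is wrong -- split into layers"
    if tokens > 5_000:
        return "audit it"
    return "ok"

def audit_chain(paths: list[Path]) -> int:
    """Print a per-file estimate plus the total inherited at session start."""
    total = 0
    for path in paths:
        tokens = estimate_tokens(path.read_text(encoding="utf-8"))
        total += tokens
        print(f"{path}: ~{tokens} tokens ({verdict(tokens)})")
    print(f"inheritance chain total: ~{total} tokens")
    return total
```

Point it at the inherited chain for the current project — for example, `audit_chain([Path.home() / ".claude" / "CLAUDE.md", Path("CLAUDE.md")])` — to see what a session pays before the first prompt.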

Real Examples From Real Projects

A Client Portal With a Custom Design System

This is a React SPA with a custom design system. The CLAUDE.md includes:

```markdown
## Design System: Required Components

- StandardButton (not native <button>)
- StandardInput (not native <input>)
- StandardModal (with semantic tokens)

## Notification System

All user-facing notifications use the centralized
useNotificationCenter hook. Do not use window.alert,
console.log for user feedback, or custom toast implementations.
```

Without these rules, the AI creates native HTML elements and custom notification systems. With them, it uses the project's components. The difference is the difference between code that fits the codebase and code that needs to be rewritten.

The DropVox macOS App

A native Swift project with specific build and distribution requirements:

```markdown
## Build Commands
- Debug: Cmd+R in Xcode or `swift build`
- Release: `./scripts/build-release.sh`
- Sign + Notarize: `./scripts/sign-and-notarize.sh`
- Create DMG: `./scripts/create-dmg.sh`

## Architecture
- WhisperKit for transcription (CoreML, Neural Engine)
- Actor-based concurrency (Swift structured concurrency)
- Sparkle 2 for auto-updates (EdDSA signatures)
```

The AI does not know that DropVox uses WhisperKit instead of whisper.cpp, or that the concurrency model is actor-based instead of GCD. These are architectural decisions that affect every line of code it writes. Stating them explicitly costs 200 tokens and saves hours of correction.

The Helsky Labs Root

The studio-level file encodes the philosophy:

```markdown
## Philosophy
Ship fast, validate faster. Shared foundation. Separate repos.

## Standardized Stack
- Next.js 16+ (App Router) for web
- React Native + Expo for mobile
- Supabase (new project per product) for backend
- Vercel for deployment
```

Every new Helsky Labs product starts with this context. The AI does not suggest Firebase, does not reach for Express, does not propose a monorepo. The decisions are made. The file enforces them.

What I Got Wrong

Not everything I tried worked. A few mistakes worth mentioning.

Overly detailed file structures. I used to include full directory trees — every folder, every file. This consumed 2,000+ tokens and became stale within a week. Now I describe the architecture in prose and let the AI explore the actual filesystem.

Tone policing. Early versions of my global file included rules like "be concise" and "do not be overly positive." These wasted tokens. The AI's communication style matters far less than its code behavior. I dropped all tone rules except the honesty requirement.

Copy-pasting between projects. I tried maintaining a "master template" and copying sections into each project. Within a month, every copy had drifted. Now I keep project files independent and only share patterns through the global and organization layers.

Not updating after incidents. The worst mistake. Something goes wrong, I fix it manually, I forget to add the rule. Two weeks later, the same thing happens. Now I have a habit: if I say "Claude should not have done that," I immediately add the rule before doing anything else. The fix can wait. The rule cannot.

The Compound Effect

The payoff is not visible in any single session. It is visible across months.

My first month with Claude Code, I spent significant time correcting the AI's decisions — wrong patterns, wrong libraries, wrong git commands. Now, with 33 context files encoding hundreds of specific rules and patterns, the AI's first attempt is right far more often. Not always. But often enough that the correction time has dropped from a major cost to a minor one.

The context file is a living document that gets smarter as you get burned. Each rule is a lesson. Each project file is institutional knowledge that survives across sessions. Each layer is a boundary that prevents the AI from making decisions outside its scope.

Nobody is going to write a definitive guide to CLAUDE.md because the whole point is that every developer's file is different. Your rules come from your disasters. Your architecture sections come from your projects. Your hard lines come from your values.

But the structure — global rules, workspace maps, organization conventions, project specifics, session memory — that transfers. Start with the scars. The rest will follow.