How I Actually Use Claude Code (After 400+ Sessions)
This Isn't a Hype Piece
I've been using Claude Code for months across every context I work in - agency client projects, indie apps I ship under my own label, and personal experiments. Hundreds of sessions. Real production code. Real deadlines.
Most writing about AI coding tools falls into two camps: breathless "10x developer" evangelism, or dismissive "it only writes fizzbuzz" skepticism. Neither matches my experience. The reality is more interesting and more nuanced than both.
Here's the honest version.
What I Do vs What Claude Does
This is the most important thing I can tell you about using AI coding tools, and it's the thing most people get wrong when they watch someone use Claude Code.
It looks like I type a prompt and the computer does my job. That's not what's happening.
What I Do (The Hard Parts)
- Problem investigation - Reading the ticket, reproducing the issue, understanding the root cause, exploring the codebase
- Solution design - Deciding what to build, how to approach it, which patterns to follow, what trade-offs to accept
- Providing context - Explaining the business logic, pointing to the right files, describing edge cases, clarifying behavior
- Steering the implementation - Reviewing Claude's plan before it writes a line of code, catching when it goes off track, redirecting mid-task
- Code review - Reading every line Claude produces, verifying correctness, checking it matches the team's patterns
- Manual testing - Running the app, clicking through flows, testing edge cases, verifying against actual requirements
- Quality judgment - Is this good enough? Does it introduce tech debt? Is it the right abstraction? Will the team understand this code?
What Claude Does (The Mechanical Parts)
- Writing the actual code - Components, functions, styles, tests based on my direction
- Boilerplate and scaffolding - File setup, imports, type definitions, test structures
- Formatting and templating - PR descriptions, commit messages, repetitive documentation
- Repetitive transformations - Renaming across files, updating patterns, migration scripts
- Build validation - Running builds, type checks, linting after changes
The Ratio
On a typical ticket, the work splits roughly 70% me, 30% Claude. But the 30% Claude now handles used to eat 50-60% of my time, because writing code is slow and tedious compared to thinking about code.
That's the actual productivity gain. Not "AI does my job" - more like "I spend my time on the parts that actually require a senior developer's judgment."
Think of it like pair programming where your partner is an extremely fast typist with good pattern recognition but no product sense, no judgment about trade-offs, and a tendency to confidently do the wrong thing if you stop paying attention.
The Setup That Makes It Work
After months of iteration, I've landed on a system with five layers. The technical details here are specific to Claude Code, but the principles apply to any AI coding tool.
Layer 1: Global Rules
A global configuration file that applies to every project. This is where my hard rules live - things that should never happen regardless of context.
# Hard Rules
## Destructive Git Commands Are Forbidden
Never run without explicit permission:
- git checkout -- on any file
- git restore on any file
- git reset in any form
- git clean
Do not revert uncommitted changes for any reason. Report the issue and ask how to proceed.
## File Safety
Never delete files without moving them to a safe location first.
Ask before deleting anything not explicitly tracked in git.
## Honesty About Untested Code
Do not claim code works if it has not been executed.
Always state: "I have not tested this yet."
Every one of these rules exists because I lost work or had to fix a mess. More on that in a moment.
Layer 2: Workspace Awareness
I work across multiple repositories - agency clients, indie products, personal projects. A workspace-level config tells Claude how the directories are organized so it doesn't confuse one project with another.
## Directory Structure
code/
├── work/agency/ # Client projects (multiple repos)
├── indie/ # My own products (shared workspace)
└── personal/ # Experiments and personal site
Small thing, but it prevents Claude from referencing patterns from Project A when working on Project B.
Layer 3: Team Conventions
For my agency work, there's a layer that encodes team-specific standards:
## Pull Request Template
All PRs must follow the standard template:
- Ticket link
- Summary (Problem / Solution / How it works)
- Testing / QA Steps
- Acceptance Criteria
- Author Checklist
## Ticket System Codes
| Team | Code | Example |
|------|------|---------|
| Client A | CLI-A | CLI-A-123 |
| Client B | CLI-B | CLI-B-45 |
This means when I create a PR, Claude already knows the format, the ticket linking convention, and the team structure. No re-explaining every session.
Layer 4: Project-Specific Context
Each project gets its own config with architecture, build commands, and current focus:
## Architecture
- Next.js 14 with App Router
- Sanity CMS v3 for content management
- SCSS modules with shared mixins
- i18n with 5 supported locales
- ISR with 60s revalidation
## Build Commands
- Build: npm run build
- Types: npx tsc --noEmit
- Lint: npm run lint
## Current Focus
- CMS migration (94% complete)
- Navigation refactor
- Third-party booking integration
Layer 5: Cross-Session Memory
Claude Code can persist learnings across sessions. When it discovers a pattern or I correct a mistake, it remembers for next time. This compounds over weeks - each session starts a little smarter about the project than the last.
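The entries it accumulates are just lines in the project config. A few examples of the kind of thing that builds up (hypothetical, but typical):
## Learned Patterns
- Use the shared i18n wrapper, not the library's hook directly
- The booking widget breaks when server-rendered - keep it client-only
- Preview mode requires the draft token from .env.local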
Why Layers Matter
When I open Claude Code in a client project, it automatically loads: my safety rules + workspace awareness + team conventions + project architecture + previous learnings. No preamble, no re-explaining. We pick up where we left off.
The upfront investment in writing these configurations pays back on every single session.
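In Claude Code specifically, the layering falls out of where the config files live - it reads a CLAUDE.md at each level of the directory tree, from the global one down to the project. Roughly (paths illustrative):
~/.claude/CLAUDE.md # Layer 1: global rules
~/code/CLAUDE.md # Layer 2: workspace awareness
~/code/work/agency/CLAUDE.md # Layer 3: team conventions
~/code/work/agency/client-a/CLAUDE.md # Layer 4: project context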
Things That Went Wrong (And the Rules They Created)
I promised honesty. Here it is.
Claude Deleted My Work-In-Progress Code
Early on, Claude decided to "clean up" files it determined were unused. It removed work-in-progress code that wasn't committed yet. Gone.
The rule it created: Never delete files without moving them to /tmp first. Never run destructive git commands without explicit permission. Never revert uncommitted changes for any reason.
I now have this at the top of every workspace config, in bold, in caps:
# CRITICAL WARNING
NEVER DELETE FILES OR REVERT CHANGES WITHOUT BEING ASKED.
Subtle it is not. But it works.
Claude Told Me Code Worked When It Didn't
Claude will say "this should work" with absolute confidence when it hasn't executed a single line. It writes code that looks correct, that reads as correct, but that breaks in ways you only discover when you run it.
The rule it created: Claude must always say "I have NOT tested this - please run it to verify." Must warn about assumptions: "This assumes X exists." Must never claim code is "ready" or "complete" without execution.
This is maybe the most important lesson for anyone using AI coding tools: writing code and testing code are different things. Claude is good at writing; it can run builds, but it can't verify real behavior. You still have to test everything yourself.
Claude Pushed Code Without Asking
Self-explanatory. Now there's a hard rule: never push to remote without explicit permission.
Claude Added AI Attribution to Commits
I don't want "Co-Authored-By: Claude" or "Generated with AI" in my commit history. The code is mine - I designed the solution, reviewed every line, and take responsibility for it. Claude typed it faster than I would have. That doesn't change the authorship.
The rule: No AI attribution in commits, PRs, or any output.
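In config form, the rules from these last two incidents sit together in the global file (wording approximate):
## Git Conduct
- NEVER push to remote without explicit permission
- No AI attribution anywhere: no Co-Authored-By lines, no "Generated with" footers in commits or PRs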
Custom Commands: Encoding Workflows
Beyond configuration, I've built 11 custom commands that map to my actual workflow. These are markdown files that define a multi-step process Claude follows.
The Ticket Workflow
My most-used command takes a ticket ID and provides structure for the entire implementation cycle:
/do-ticket PROJ-123
The flow:
- Context Gathering - Claude explores the codebase. I provide the business context, explain the problem, point to relevant files, and clarify acceptance criteria.
- Planning - Claude proposes an approach. I review it, push back on things that won't work, suggest alternatives, and approve or redirect before any code gets written.
- Implementation - Claude writes code in atomic commits. I'm watching - reading diffs, catching issues, course-correcting: "No, use URL params, not local state - it needs to be shareable." "Use the existing hook from shared/ instead of writing a new one." "The query is missing a filter - that'll pull in the wrong data."
- Validation - Claude runs builds and type checks. I test manually - running the app, verifying the UI, checking edge cases.
- PR Prep - Claude generates the PR from our team template. I review and edit before it's created.
The command provides structure. I provide the judgment.
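For the curious, here's a condensed sketch of the command file itself (abridged - the real one carries more checks and reminders):
# Do Ticket
Implement a ticket end to end, with review gates.
## Input
$ARGUMENTS - the ticket ID (e.g. PROJ-123)
## Phase 1: Context
Explore the codebase. Ask me for business context, relevant files, and acceptance criteria before proceeding.
## Phase 2: Plan
Propose an approach. STOP and wait for my approval before writing any code.
## Phase 3: Implement
Work in atomic commits. Surface anything ambiguous instead of guessing.
## Phase 4: Validate
Run build, type check, and lint. State clearly that manual testing is still required.
## Phase 5: PR
Draft the PR from the team template. Do not create it without my sign-off.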
Auto-Generated Standups
/standup
Scans all my project directories for recent git activity, categorizes into completed/in-progress/blocked, and outputs a standup. I edit it - Claude gets the facts from commits but doesn't know what was actually hard or what I'm blocked on.
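Under the hood it's nothing fancy - conceptually it boils down to a git log scan across the workspace, something like this (a sketch; paths and time window are illustrative):
# Scan every repo for my commits in the last day
for repo in ~/code/work/agency/* ~/code/indie/* ~/code/personal/*; do
  git -C "$repo" log --since="24 hours ago" --author="$(git config user.name)" --oneline 2>/dev/null
done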
Structured Code Reviews
/code-review #234
Claude runs through a 40+ point checklist: correctness, security, performance, code quality, and project-specific patterns. But I still read the PR myself. Claude catches mechanical issues I might miss (unused imports, missing error handling). I catch things Claude can't judge - does this approach make sense for the product? Is this the right abstraction? Will the team understand it?
The two reviews together are better than either alone.
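An excerpt from the checklist, to give a flavor (abridged and lightly paraphrased):
## Correctness
- [ ] Logic matches the ticket's acceptance criteria
- [ ] Error states handled, not just the happy path
## Security
- [ ] No secrets or tokens in the diff
- [ ] User input validated before use
## Project Patterns
- [ ] Uses shared hooks/utilities instead of duplicating them
- [ ] Follows existing naming conventions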
Other Commands I Use Regularly
| Command | What It Does |
|---|---|
| /pr | Generates PR from current branch using our team template |
| /estimate-ticket | Scope analysis with story point estimation |
| /deploy-check | Pre-deploy validation (build, types, lint, debug code scan) |
| /audit | Project health check (deps, security, bundle analysis) |
| /sanity-schema | CMS schema generation and auditing |
Building Your Own
Commands are just markdown files. Here's the minimal structure:
# Command Name
What this command does.
## Input
$ARGUMENTS
What the user provides (ticket ID, PR number, etc.)
## Phase 1: First Step
What Claude should do here...
## Phase 2: Next Step
And here...
## Reminders
- Don't push without asking
- Don't modify unrelated files
- Say when code is untested
Save it in your commands directory and it's available as /command-name. The barrier to entry is writing a markdown file.
Extended Tooling: MCP Servers
MCP (Model Context Protocol) servers extend what Claude can access beyond just reading files.
What I Use
Browser automation - Claude can interact with a running app through Playwright. Useful for verifying UI changes, filling forms, and navigating flows programmatically.
Documentation lookup - Instead of relying on training data that might be outdated, Claude fetches current, version-specific library docs on demand. When I add "use context7" to a prompt about a library API, it pulls the actual docs instead of guessing.
Semantic code navigation - Symbol-level navigation across large codebases. "Find all usages of this type" across the entire project.
These aren't strictly necessary - Claude works fine without them. But they reduce the number of times I have to say "no, that API changed in v3, check the docs."
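Wiring them up is a one-time config. In Claude Code, MCP servers are declared in a .mcp.json (or registered via claude mcp add); a sketch with the two I mentioned (package names may have moved - check each server's docs):
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}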
Permission Management
I use an explicit allowlist. Claude can only run commands I've pre-approved:
Allowed:
Git: add, commit, log, status, diff, branch
Build: npm run build, npm test, npm run lint
Tools: tsc, vitest, playwright test
GitHub: gh pr, gh issue
NOT allowed (requires approval each time):
Git: push, force operations, rebase
Destructive: rm, git clean, git reset
Network: arbitrary curl/wget
I started with "allow everything, deny dangerous things." That was backwards. Now I allow only what's needed. If Claude needs something new, it asks, I approve once, and it gets added to the allowlist.
Per-project overrides let me add project-specific permissions. A monorepo using pnpm gets pnpm commands. A project with Docker gets docker commands. Each project gets exactly what it needs and nothing more.
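In Claude Code these rules live in the settings file (.claude/settings.json); a trimmed version of the allowlist shape, with illustrative patterns:
{
  "permissions": {
    "allow": [
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Bash(git status)",
      "Bash(git diff:*)",
      "Bash(npm run build)",
      "Bash(npm run lint)",
      "Bash(npx tsc --noEmit)",
      "Bash(gh pr:*)"
    ]
  }
}
Anything not on the list falls back to a per-use prompt - which is exactly the "requires approval each time" behavior above.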
The Sound Effect (Seriously)
This is a small thing but it changed how I work:
{
"hooks": {
"Stop": [{
"hooks": [{
"type": "command",
"command": "afplay /System/Library/Sounds/Blow.aiff"
}]
}]
}
}
When Claude finishes a task, my Mac plays a sound. That's it. But it means I can switch to manual testing, reading tickets, or reviewing PRs while Claude writes code, and I know when to come back and review the output. It turns an interactive process into an asynchronous one for the typing-heavy portions.
What I Use This Across
I work across three contexts, and the setup adapts to each:
Agency Client Work
Large-scale web applications for major brands - restaurants, wellness platforms, retail. Next.js, Sanity CMS, complex i18n, performance-critical. The team conventions layer and PR template automation are essential here. Everything needs to be production-quality and follow team patterns.
Indie Products
macOS apps, SaaS tools, mobile apps under my own label. Swift/SwiftUI, Electron, React Native, Next.js. The pace is faster here - I'm shipping MVPs, so the focus is on getting to a working product quickly. Custom commands for release workflows and app store submission help here.
Personal Projects
Experiments, learning, portfolio. Less structure needed, more exploration. Claude is most useful here for rapid prototyping - getting a working version fast so I can evaluate whether an idea is worth pursuing.
The layered config system means the same tool adapts to all three without me reconfiguring anything.
Advice If You're Starting Out
Week 1: The Minimum Viable Setup
- Write a global config with your git conventions and safety rules. At minimum: don't delete files, don't push without asking, don't claim untested code works. (A starter sketch follows this list.)
- Add a project config to your main project with architecture and build commands.
- Set up permissions - start with an allowlist of just what you need.
- Add the sound hook. Trust me.
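Here's a starter global config, distilled from everything above - adapt the wording to taste:
# Hard Rules
- Never run destructive git commands (reset, clean, checkout --, restore) without explicit permission
- Never delete files - move them aside and ask
- Never push to remote without being asked
- Never claim untested code works; always say "I have not tested this"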
Month 1: Build Your Workflow
- Write one custom command for your most repeated task (PR creation is a good start).
- Layer your configs - team conventions in one layer, project specifics in another.
- Pay attention to what you keep re-explaining to Claude. Every repeated explanation is a config improvement waiting to happen.
The Ongoing Discipline
- Review every line. Claude is fast, not correct. Your review is what makes the code production-quality.
- Test everything manually. Claude can run builds but can't judge if the UI feels right, if the flow makes sense, or if the edge case matters.
- Course-correct early. If Claude's plan looks wrong, stop it before it writes 200 lines in the wrong direction. Steering up front is cheaper than fixing after.
- Update your configs. When you correct Claude about something twice, add it to the config so there's no third time.
Honest Assessment
Where Claude Code genuinely helps:
- Writing boilerplate and scaffolding (components, types, test structures)
- Repetitive transformations across many files
- Formatting PRs, commits, and documentation to match templates
- Catching mechanical issues in code review (missing error handling, unused imports)
- Searching and navigating large codebases quickly
- Writing the straightforward parts of features so I can focus on the tricky parts
Where it still falls short:
- Product judgment (doesn't know what the user actually needs)
- Architectural decisions (proposes patterns that look clean but don't fit the real constraints)
- Knowing when to stop (will happily over-engineer a simple feature if you let it)
- Testing (can write test code but can't actually verify behavior)
- Confidently wrong answers (will tell you code works when it doesn't, every single time, unless you force it not to)
The net effect: I'm faster at shipping. Not because I do less - but because I spend more time on the parts that need a senior developer and less time on the parts that are just typing. The quality bar is the same because I'm still the one setting it.
The Circular Saw Analogy
A circular saw doesn't replace a carpenter. It makes cuts faster so the carpenter can focus on the design, the joinery, the parts that require skill and judgment. You wouldn't hand a circular saw to someone who doesn't know carpentry and expect good furniture. And you wouldn't say the saw built the house.
That's how I think about Claude Code. It's a power tool. The craft is still mine.
If you want to see what I'm building, check out my projects on GitHub or follow the journey on LinkedIn.