I Coded More in February Than in All of 2024
The numbers don't make sense.
I built an analytics dashboard for my brag book last week — a way to track commits, pull requests, shipped features, and lines touched across all my projects. The kind of thing you build on a Sunday because you're curious, and then stare at for an hour because the data says something you didn't expect.
February 2026 — one month — produced more output than all of 2024. Twenty-eight days beat twelve months. Not by a small margin. By a margin that made me rerun the query because I assumed the date filter was broken.
It wasn't broken.
And February nearly matched the entirety of 2025.
I sat with that for a while. The obvious explanation is "AI tools." And that's partly right. But it's not the whole story, and the whole story is more interesting — and more personal — than a productivity blog post about prompt engineering.
The Dashboard That Started a Conversation
The brag book started as a personal tracking tool under Helsky Vault. Nothing fancy — a dashboard that aggregates my activity across GitHub repos, project milestones, and shipped features. I built it because I'm terrible at remembering what I accomplished last Tuesday, let alone last quarter.
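At its core, that kind of dashboard is just a group-by over timestamps. Here's a minimal sketch of the aggregation, with hypothetical dates and function names, not the actual Helsky Vault code:

```python
from collections import Counter
from datetime import date

def monthly_commit_counts(commit_dates):
    """Group a list of commit dates into (year, month) buckets."""
    return Counter((d.year, d.month) for d in commit_dates)

def total_for_year(counts, year):
    """Sum every monthly bucket belonging to one year."""
    return sum(n for (y, _), n in counts.items() if y == year)

# Hypothetical data: three commits in Feb 2026, two spread across 2024.
commits = [
    date(2026, 2, 3), date(2026, 2, 14), date(2026, 2, 27),
    date(2024, 5, 9), date(2024, 11, 21),
]
counts = monthly_commit_counts(commits)
print(counts[(2026, 2)])             # 3
print(total_for_year(counts, 2024))  # 2
```

The real version pulls the dates from the GitHub API and a few project trackers, but the comparison that surprised me is exactly this: one monthly bucket against a full year's sum.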
The trend line isn't gradual. It's not a steady climb from 2024 through 2025 into 2026. There's a visible inflection point around late 2025, when I started using Claude Code as a daily driver. And then February 2026 goes vertical.
But correlation is easy. Causation is the interesting part.
The AI Multiplier — But Not the Way You Think
Yes, I use AI coding tools daily. Claude Code handles a substantial portion of the work that used to eat my evenings — boilerplate, repetitive refactors, build configs, test scaffolding. I've written about this before.
But here's what the productivity discourse misses: the AI doesn't do the thinking. It does the typing.
The bottleneck in my work was never keystrokes per minute. It was never "I know exactly what to build but my fingers are too slow." The bottleneck was the gap between decisions. Between "I need an API endpoint for X" and that endpoint existing. Between "this component should handle Y edge case" and that handling being implemented and tested.
AI compressed that gap from hours to minutes. And when you compress the gap between decisions, something unexpected happens — the decisions themselves get faster. Because you're not losing context. You decide, it exists, you evaluate, you decide again. The feedback loop tightens to the point where flow state becomes the default, not the exception.
That's the real multiplier. Not "AI writes code for me." But "AI keeps my brain in the zone where the good decisions happen."
The Part Nobody Talks About
I'm twice exceptional.
If you're not familiar with the term — 2e, short for twice exceptional, describes individuals who are both intellectually gifted and neurodivergent. The "twice" is literal: exceptional capacity in some areas, genuine disability in others. In my case, ADHD. The clinical kind, not the "I get distracted by TikTok sometimes" kind.
Here's the gifted side: a brain that processes information fast, holds large amounts of context simultaneously, and switches between contexts with almost zero ramp-up time. I can look at a system architecture and see the failure modes before anyone's finished the whiteboard drawing. I can hold six parallel conversations and track the state of all of them.
Here's the ADHD side: I can also stare at a blank editor for forty-five minutes because the task isn't stimulating enough to activate my brain. I can know exactly what needs to be done — have the entire solution mapped in my head — and physically not start. Procrastination that isn't laziness. It's a dopamine gate that doesn't open on command.
And then there's the fun combo platter. Perfectionism that paralyzes — because the gifted side sees the ideal solution and the ADHD side can't tolerate the boring steps between here and there. Anxiety that compounds — because you know you're capable of more, you can see the gap between your potential and your output, and that gap feels like a personal failing every single day. The imposter syndrome hits different when you genuinely can't explain why yesterday you architected an entire system in three hours and today you can't write a single function.
For most of my career, those traits were as much liability as advantage.
The fast processing meant I'd arrive at solutions before I could articulate them — which made me look impulsive in code reviews. The context-holding meant I'd track six conversations simultaneously while appearing unfocused in all of them. The context-switching meant I'd start three features in a morning and finish zero by afternoon — not because I lost interest, but because each one sparked a better idea that my brain chased without permission.
The procrastination was the worst part. Entire days lost to the wrong kind of friction. Not hard problems — hard problems are stimulating, those I'd hyperfocus on for twelve hours straight. The killer was medium problems. Boring enough not to engage the dopamine system, important enough that they couldn't be ignored. So they'd sit there. And I'd sit there. And the anxiety would build because I knew I was wasting time and couldn't make myself stop wasting it.
If you've never experienced executive dysfunction, I genuinely don't know how to explain it. Imagine knowing exactly what to type, having the keyboard in front of you, wanting to type it, and your hands just... don't. It's not a motivation problem. It's a neurochemistry problem wearing a motivation costume.
Traditional development didn't reward that cognitive profile. You sit at one file. You type. You wait for the build. You type more. One thing at a time, sequentially, like a well-behaved assembly line. The fast brain doesn't help when the bottleneck is the compiler. And the ADHD brain actively suffers when the work is sequential, slow, and offers no feedback until the end.
Then agents showed up.
Orchestrating Instead of Typing
Here's what a development session looks like now.
I might have multiple Claude Code agents running in parallel. One is scaffolding a database migration. Another is writing tests for a component I finished an hour ago. A third is refactoring an API endpoint while I review the output of the first. I'm reading the results from one, course-correcting another, and queueing up work for a fourth.
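The shape of that workflow is fan-out/fan-in concurrency. A toy sketch in Python's asyncio, nothing to do with how Claude Code actually runs, just the pattern: launch several workstreams, then review each result as it completes rather than waiting for all of them:

```python
import asyncio

async def agent(name, task, seconds):
    # Hypothetical stand-in for an agent: it "works" for a while, then
    # returns a result. Real agents stream output and take corrections.
    await asyncio.sleep(seconds)
    return f"{name}: finished {task}"

async def orchestrate():
    # Fan out: three workstreams run concurrently.
    jobs = [
        agent("agent-1", "database migration", 0.03),
        agent("agent-2", "test scaffolding", 0.01),
        agent("agent-3", "API refactor", 0.02),
    ]
    # Fan in: handle each result in completion order, the way the
    # orchestrator reviews whichever agent finishes first.
    results = []
    for coro in asyncio.as_completed(jobs):
        results.append(await coro)
    return results

results = asyncio.run(orchestrate())
print(results)
```

The human in this loop is the `orchestrate` function: deciding what each workstream is, then processing results as they land instead of blocking on any single one.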
That's not multitasking. Multitasking is a myth — for most people.
What it is — and this is the 2e connection — is parallel orchestration. I'm not doing three things at once. I'm directing three things at once. Each agent needs context, decisions, architectural guidance, course corrections. My brain processes the incoming results fast enough to keep all three moving without losing the thread of any one of them.
The same trait that made me look scattered in meetings — holding too many contexts, jumping between ideas, processing faster than I could communicate — turned out to be exactly the cognitive profile for managing multiple AI agents.
Think about what an agent orchestrator actually needs: the ability to hold the full picture of multiple parallel workstreams, switch between them without context loss, process new information quickly enough to provide real-time direction, and maintain a mental model of how the pieces fit together.
That's not a job description. That's how my brain has always worked. It just never had enough hands.
Now it does.
I can finally "multitask." Not because I changed. Because the tools finally caught up to how my brain already worked.
That's not nothing.
The Craft Still Matters — More Than Ever
Here's where I refuse to write the "AI will replace developers" paragraph.
I'm currently re-reading The Pragmatic Programmer. Not for nostalgia — because the principles hit harder now than they did the first time. DRY, orthogonality, tracer bullets, the power of plain text — these concepts are about thinking, not typing. AI handles typing. It does not handle thinking.
Getting exceptional results from AI coding tools requires being exceptional at the things AI cannot do:
Architecture. AI will generate any architecture you describe. It has no opinion about whether that architecture is right for your problem. It will happily build a microservice for a todo app. It will gleefully add Redux to a three-page site. The human decides what to build. The AI decides how to express it in code.
Design. Not pixels — systems. Which abstractions serve the problem? Which layers need to exist? Where should the boundaries be? AI generates code inside boundaries. It doesn't know where boundaries should go.
Product sense. Knowing which feature to build next, which corner to cut, which "nice to have" is actually "ship it without." This is business judgment, not technical skill. AI has exactly none of it.
Taste. The hardest one to articulate. Knowing when code is "right" — not just functional, but elegant, maintainable, appropriately complex for the problem at hand. AI generates code that works. A senior developer knows when working code is still wrong.
The developers who will thrive with AI tools are the ones who were already good at the parts AI can't do. Experience didn't become less valuable. It became the only differentiator.
If you're using Claude Code to write a React component, you'll get a React component. If you're using it while understanding component architecture, accessibility patterns, performance implications, and design system constraints — you'll get a component that belongs in a production codebase.
Same tool. Different operator. Wildly different results.
I'm Finally a Full-Stack Developer
I wrote a blog post in January about my eleven-year journey from frontend to full-stack. The honest version of that post was: "I'm almost there, but the backend still makes me hesitate sometimes."
February settled it.
This month alone I shipped database migrations, API endpoints, authentication flows, infrastructure configs, a full analytics dashboard, and the frontend to tie it all together. Across multiple projects. In multiple languages.
Not because I suddenly mastered backend development in four weeks. But because AI compressed the learning curve in a way I didn't think was possible. When I didn't know the right Postgres index for a query pattern, I described the access pattern and got an explanation with the solution. When I needed an auth flow I'd never implemented from scratch, I sketched the architecture and the implementation appeared — and I could read it, evaluate it, and know whether it was right, because I understood the principles even when I hadn't memorized the syntax.
The gap between "I know conceptually how this works" and "I can ship this in production" — that gap used to be measured in months of ramp-up time. Now it's hours.
Eleven years of frontend made me a specialist. AI made the specialization portable. I understand how web applications work from database to browser. I always did. I just couldn't type fast enough in every layer.
Now I can.
The Vulnerable Part
The 2e thing is not something I talk about casually.
In Brazilian culture — and in tech culture more broadly — neurodivergence is either romanticized or pathologized. "Your ADHD is a superpower!" on one end. "Have you tried just focusing?" on the other. Neither frame is useful. It's a cognitive profile. It comes with trade-offs that are real and sometimes brutal.
Some months the fast processing means I ship more in four weeks than I shipped in a year. Other months it means I start fourteen things, finish one, and spend the last week of the month wondering what happened to the other thirteen.
I'm sharing it here because the narrative around AI productivity leaves something out. The hot takes are all "tools got better, everyone's more productive now." That's not wrong, but it's incomplete. The tools got better in ways that happen to match certain cognitive profiles more than others.
My 2e brain didn't become more productive because AI is a universal multiplier. It became more productive because AI agents are a specific multiplier — for people who think in parallel, hold multiple contexts without strain, and make decisions faster than they can implement them.
That's not everyone. And that's fine. Different brains will find different leverage points in these tools. But if something in this post sounds familiar — the fast processing, the context juggling, the frustration of arriving at the answer three steps before your hands catch up — you might be about to have a very good couple of years.
What February Actually Means
February didn't beat 2024 because I worked harder. I didn't work more hours. I probably worked fewer.
It beat 2024 because the friction between thought and implementation dropped to nearly zero. Because the tools matched my cognitive profile in a way nothing else ever has. Because eleven years of accumulated judgment — architecture, design, product sense, taste — finally had an execution engine that could keep pace.
The dashboard numbers are just the surface. What they measure — commits, PRs, features shipped — is output. What they don't measure is the quality of decisions behind that output, the depth of understanding, the satisfaction of finally working the way my brain always wanted to work.
I spent years feeling like I was driving a sports car in a school zone.
The road opened up.