We're Not Getting Better at Coding. We're Getting Better at Delegating.
After tracking over a hundred developers with FlouState, I noticed something that changed how I think about the future of programming. It started with a simple shower thought that wouldn't leave me alone.
The Shower Thought That Started Everything
“You know when you're debugging and you have to trace multiple paths - some shallow, some deep? That used to be your brain's job. Now it's AI's. We're not getting better at debugging. We're getting better at delegating.”
That was it. A random Tuesday morning realization that felt both obvious and profound. We're not becoming better debuggers. We're becoming better delegators.
My 3-Hour Debug Session That Changed Everything
Tuesday night, November 18th. An error reporting service flagged that FlouState was occasionally failing to save users' tracking data. Not always. Just... sometimes.
I dug into the logs. The error was cryptic:
```
Error: SQLITE_BUSY: database is locked
```

This was happening in the VS Code extension when it tried to save offline tracking data to local storage. Classic race condition nightmare:
- Only happened under heavy usage
- Intermittent (maybe 1 in 100 saves)
- Multiple async operations writing to the same SQLite database simultaneously
- VS Code's globalState API doesn't queue writes automatically
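
To make the hazard concrete, here's a minimal sketch - not the actual FlouState code, and the event names are stand-ins - of how two independent handlers can both write to globalState at the same time:

```typescript
import * as vscode from "vscode";

// Illustrative only: two handlers that each persist state on their own.
// Nothing coordinates the writes, so under heavy usage both can hit
// VS Code's underlying SQLite store at the same time.
export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    vscode.workspace.onDidSaveTextDocument(async () => {
      await context.globalState.update("offlineBuffer", { lastSave: Date.now() });
    }),
    vscode.window.onDidChangeActiveTextEditor(async () => {
      // Fires independently of the handler above - no shared queue.
      await context.globalState.update("sessionStats", { lastSwitch: Date.now() });
    })
  );
}
```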
I did exactly what my own data predicted I would do. I ignored the sophisticated debugger tools available to me - well, mostly. I set a couple of breakpoints, looked at some variables, gave up, and went back to my comfort zone: console.log.
console.log("Before globalState.update");
await context.globalState.update(key, value);
console.log("After globalState.update - success!");Then I tried to reproduce it by hammering the extension with rapid-fire events.
Two hours of failed reproduction attempts later, I had logs everywhere and still couldn't trigger it reliably. The race condition was too subtle - it only happened when VS Code's internal SQLite database was already busy with another write.
At 10 PM, frustrated, I pasted the OfflineBufferManager code into an LLM and explained: “Users are getting SQLITE_BUSY errors intermittently. Multiple things write to globalState. How do I prevent concurrent writes?”
45 seconds later, it responded: “Create a write queue that serializes all storage operations. Here's the pattern.”
It even generated a complete StorageManager class with a singleton pattern, sequential write queue, and proper error handling.
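
For concreteness, here's a minimal sketch of that pattern - not the generated code itself, and the class and method names are my own - assuming a singleton that chains every write onto a single promise so only one globalState.update is ever in flight:

```typescript
import * as vscode from "vscode";

// Sketch of a serialized write queue (assumed names, not the generated
// StorageManager verbatim). Each write is chained onto the previous one,
// so the underlying SQLite store only ever sees one update at a time.
class StorageManager {
  private static instance: StorageManager;
  private queue: Promise<void> = Promise.resolve();

  private constructor(private readonly state: vscode.Memento) {}

  static init(state: vscode.Memento): StorageManager {
    if (!StorageManager.instance) {
      StorageManager.instance = new StorageManager(state);
    }
    return StorageManager.instance;
  }

  // Callers get a promise for their own write; the queue itself swallows
  // failures so one bad write doesn't wedge everything behind it.
  write(key: string, value: unknown): Promise<void> {
    const result = this.queue.then(() => Promise.resolve(this.state.update(key, value)));
    this.queue = result.catch((err) => {
      console.error(`Storage write failed for ${key}:`, err);
    });
    return result;
  }
}

// Usage: route every save through the same queue.
// const storage = StorageManager.init(context.globalState);
// await storage.write("offlineBuffer", bufferedEvents);
```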
I stared at the solution. Two hours of manual debugging vs 45 seconds of delegation.
Is This Just Me, or Is This Everyone?
I started asking around. Turns out, I'm not alone. Almost every developer I talk to has a version of this story. The 6-hour bug that AI solved in 2 minutes. The codebase they understood in 10 minutes instead of 3 days.
Here is the nuance: I have hard data showing we aren't using debuggers anymore (that's the 1% stat from my previous post). What I don't have yet is the hard data comparing pre-AI vs post-AI workflow patterns to prove exactly where that time went.
But based on my own workflow and conversations with dozens of developers, I have some strong hypotheses.
My Working Hypotheses
Hypothesis 1: The Exploration And Debugging Collapse
Before AI: I'd spend 30+ minutes wandering through dependency chains, reading documentation, tracing function calls just to understand how a single feature worked.
With AI: I paste a file and ask “what does this do?” Instant context. No more archaeological expeditions through codebases.
What I'm measuring: Is exploration time actually dropping? By how much? Are there trade-offs in depth of understanding?
Hypothesis 2: The Creation Explosion
When you're not spending time figuring out how existing code works, you're writing new code. My gut says my creating vs exploring ratio has completely flipped since I started using AI for everything.
What I'm measuring: Time spent creating vs exploring+debugging, month over month. Quality of code created with vs without AI assistance.
Hypothesis 3: The Delegation Skill Gap
The developers who adapt fastest aren't necessarily the best coders. They're the best communicators. They know how to context-set, prompt efficiently, validate solutions quickly, and recognize when AI is confidently wrong.
What I'm measuring: Can I quantify “delegation skill”? Does it correlate with productivity?
The Evolution of Developer Debugging
Level 1: Manual Everything
1970s-1990s
Print statements. Core dumps. Reading hex. Every bug was a murder mystery where you were both detective and forensics team.
Level 2: Better Tools
2000s-2010s
IDEs with integrated debuggers. Breakpoints. Watch variables. Stack traces. We built powerful tools... that 75% of developers never use.
Level 3: Delegation Era
2020s - Present
AI explores the paths. We describe the problem. The machine does the traversal we used to do manually.
Level 4: ???
What's Next
AI prevents bugs before they exist? Self-healing code? We become architects of intent rather than authors of implementation?
This Isn't About Debugging. It's About Everything.
The delegation pattern extends beyond debugging:
Documentation → AI Explains Code
- Before: Read 500 lines to understand a module
- Now: “Explain what this service does”
- Time saved: 15-20 minutes per exploration (or hours if there's no documentation)
Boilerplate → AI Generates Structure
- Before: Copy-paste-modify from old projects
- Now: “Create a React component with TypeScript and tests”
- Time saved: 10-30 minutes per component
Code Reviews → AI Pre-Screens
- Before: Senior devs spend hours reviewing
- Now: AI catches obvious issues first
- Human focus: Architecture and business logic
Learning → AI as Tutor
- Before: Stack Overflow diving for hours
- Now: “Why doesn't this work?” with full context
- Time to understanding: 5x faster
Where AI Actually Works (And Where It Fails)
Let me be specific. After a year of building with AI, here's the reality:
Where AI Shines
- ✓ Isolated bugs in single modules (my race condition example)
- ✓ Closure problems with clear repro steps (race conditions, async bugs)
- ✓ Syntax & type errors, obvious logic bugs
- ✓ Explaining unfamiliar code (React hooks, complex patterns)
- ✓ Generating boilerplate (API endpoints, test scaffolds)
Where AI Struggles
- ✗ Complex system interactions (“Why does the payment webhook fail when email is under load?”)
- ✗ Performance without clear signals (“The app feels slow” with no profiler data)
- ✗ Non-deterministic bugs (99% fine, 1% crashes, can't reproduce)
- ✗ Business logic validation (“This works but feels wrong for our users”)
- ✗ Architecture trade-offs (database sharding for 10M users with complex queries)
- ✗ Debugging AI-generated code (the irony is real)
The Pattern:
AI excels at well-defined problems with clear inputs. It struggles with ambiguous, multi-layered, context-heavy problems.
The Uncomfortable Truth About Our Future Value
Here's what keeps me up at night: if AI can traverse debugging paths, explore codebases, and write boilerplate, what's uniquely human about development?
After years of building with AI and watching how my role has shifted, I think it's this:
1. Knowing What to Build (Not How)
AI can implement any feature you describe. But it can't tell you which features actually matter.
Users will ask for fancy dashboards and elaborate analytics. AI will happily build all of it. But should you?
The hard part isn't the code. It's figuring out which 20% of features will solve 80% of real problems. That requires talking to users, understanding their context, and saying no to good ideas in favor of great ones.
AI optimizes for completeness. Humans optimize for impact.
2. Understanding Why It Matters (Business Context)
AI can generate perfect code for the wrong problem. Understanding product-market fit, user psychology, competitive positioning - that's still human work.
3. Deciding Trade-offs (Perfect vs Shipped)
Should we refactor this mess or ship the feature? AI will always choose “refactor properly.” But sometimes good enough shipped beats perfect delayed.
4. Recognizing When AI Is Wrong (Judgment)
AI is confidently wrong a lot. Knowing when to trust it vs when to dig deeper is a learned skill that requires experience.
5. Connecting Disparate Systems (Integration)
“Make the auth service talk to the payment gateway while respecting GDPR and handling the edge case where...” AI can help with pieces, but orchestrating complex integrations still needs human judgment.
We're transitioning from implementers to orchestrators.
The Hidden Cost of Delegation
Here's what worries me: I'm forgetting how to manually debug complex issues.
A year ago, I could trace through a gnarly closure problem without AI help. Now? My first instinct is to paste the code into an LLM.
When AI fails (and it does), I've lost practice in the manual troubleshooting skills I used to have. I'm slower at the fundamentals, but better at understanding the bigger picture and how everything connects.
Are we trading depth for speed?
I don't know yet. But it's a question worth tracking. What if we're building a generation of developers who can orchestrate brilliantly but can't debug when the AI doesn't know the answer?
This isn't necessarily bad. Maybe future developers won't need manual debugging skills, just like we don't need to understand assembly language anymore.
But it's a trade-off we should make consciously, not accidentally.
The Questions Worth Asking
We're all running this experiment, whether we realize it or not. Here are the questions I think about:
- Are we becoming dependent? When AI is down, can we still solve hard problems?
- What's the success rate? How often do AI solutions actually work vs need significant fixes?
- Is there a quality gap? Code I write vs code AI writes - which breaks more in production?
- Do we understand what we ship? Can you explain the code AI wrote for you six months later?
- Is delegation a skill? Some developers get 10x more value from AI than others. Why?
I don't have answers. But I'm watching my own patterns shift, and it's fascinating.
The developers who figure this out first - who learn to delegate effectively while maintaining their fundamentals - will have a massive advantage.
About this post:
Based on personal experience building with AI over the past few years, analyzing feedback from dozens of developers, and observing the shift in my own workflow patterns. The 1% debugger usage stat is real - it comes from our analysis of 68 developers. The delegation trend is everywhere, but the data is still emerging.
What's your take? Are we becoming better developers or just better delegators? Both? Neither? I'm genuinely curious what patterns you're seeing.