AI Did Not Replace Debugging — It Changed It
There is a popular narrative right now that AI will soon replace programmers entirely. After spending several hours debugging a real-world Next.js issue together with ChatGPT, I think the reality is much more interesting.
What happened was not “AI solved everything automatically.” Nor was it “AI is useless.”
The actual process looked more like a collaboration between:
- a programmer,
- an AI assistant,
- and traditional search/documentation tools.
Surprisingly, each part turned out to be important.
The Problem
Our Next.js application suddenly started failing with:
ReferenceError: boolean is not defined
The error came from generated client chunks inside Turbopack.
Not from our code. Not from TypeScript. Not from React components.
Just a completely bizarre runtime failure somewhere deep inside generated bundles.
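For context: boolean is a TypeScript type, not a JavaScript value, so if a transform ever leaks it into runtime code as a bare identifier, the engine resolves it as an ordinary variable lookup and throws. A hypothetical one-liner showing that failure class (not our actual chunk):

// Hypothetical sketch, not our generated chunk: `boolean` exists
// only at the type level, so at runtime it is an undefined variable.
const flag = boolean // ReferenceError: boolean is not defined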
Google searches were almost useless:
- the error text was too generic,
- stack traces pointed into transformed chunks,
- and there were no obvious GitHub issues matching the exact symptom.
This is exactly the type of debugging case where traditional search starts breaking down.
Where AI Was Extremely Useful
ChatGPT was surprisingly effective at:
- narrowing down hypotheses,
- recognizing ecosystem patterns,
- identifying likely problem areas,
- and connecting seemingly unrelated symptoms.
For example, it quickly recognized that:
- the issue was probably not browser-related,
- was not caused by our application logic,
- and was likely connected to Turbopack + ESM + markdown tooling.
It also correctly identified the suspicious dependency chain:
react-markdown
-> rehype
-> property-information
That dramatically reduced the search space.
Instead of randomly changing code for hours, we could focus on testing bundler behavior.
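For example, a minimal page that imports only react-markdown exercises the whole suspicious chain (the chain itself is easy to confirm with npm ls property-information). A hypothetical repro page, not our actual code:

// app/repro/page.jsx (hypothetical minimal reproduction)
'use client'
// react-markdown transitively pulls in rehype and property-information,
// so bundling this single import is enough to trigger the chain.
import ReactMarkdown from 'react-markdown'

export default function ReproPage() {
  return <ReactMarkdown># hello</ReactMarkdown>
}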
But AI Was Not Enough
At the same time, the process absolutely still required a human developer.
We had to:
- inspect generated chunks,
- run experiments,
- change configs,
- compare Webpack vs Turbopack behavior (see the sketch after this list),
- validate assumptions,
- and continuously test hypotheses.
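The bundler comparison, at least, is cheap to reproduce, because the bundler is chosen per dev invocation. A sketch of the two commands (the exact flag depends on the Next.js version):

next dev              # bundles with Webpack
next dev --turbopack  # bundles with Turbopack (--turbo on older versions)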
Even more importantly: the AI was not always correct.
At one point it confidently suggested disabling Turbopack through Next.js config options that did not actually exist for our setup.
That forced us to:
- verify documentation,
- search manually,
- and continue experimenting.
Eventually we discovered the correct solution ourselves:
const next = require('next')
const dev = process.env.NODE_ENV !== 'production'

// Opt the custom server out of Turbopack and back into Webpack
const app = next({
  dev,
  webpack: true,
})
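With that option in place, our custom server bundled through Webpack again and the error disappeared.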
Ironically, this mistake highlighted the real value of the workflow.
AI was not acting like an oracle. It was acting more like a very fast reasoning partner.
The Real Pattern Emerging
What happened here felt less like “AI replacing programmers” and more like a new debugging model emerging:
- AI helps generate and prioritize hypotheses.
- The developer validates them experimentally.
- Search engines and documentation provide ground truth.
- Together they dramatically accelerate problem solving.
None of the three pieces alone would have solved this efficiently.
Without AI: we probably would have spent much longer exploring irrelevant directions.
Without the developer: the wrong assumptions would never have been corrected.
Without documentation and search: some implementation details would remain unverifiable.
Why This Matters
A lot of current discussions frame the future as:
- humans vs AI,
- or search vs AI.
But real engineering work increasingly looks like:
- humans + AI + search/documentation systems.
Each one compensates for the weaknesses of the others.
AI is good at reasoning across incomplete information. Search engines are good at retrieving authoritative sources. Developers are good at experimentation, architecture understanding, and reality checks.
This debugging session was a good reminder that modern software engineering is becoming less about memorizing answers and more about navigating systems of reasoning, verification, and experimentation efficiently.
And honestly, that may be more interesting than simple replacement narratives.