How I Use AI Without Writing Garbage Code
AI is a multiplier, not a replacement. It multiplies what you already know.
You're reviewing a pull request. The code works — tests pass, feature does what it's supposed to. But something's off. Utility functions recreated from scratch when they already exist in the codebase. API calls bypassing the existing service layer. Patterns that don't match the rest of the project. It's not bad code. It's disconnected code. The code doesn't know about the codebase it lives in. That's the tell.
There are two camps right now. Developers who paste AI output without reading it, shipping codebases that feel stitched together from a dozen different authors. And developers who refuse to use AI at all in 2026, which is just leaving speed on the table. Both camps are wrong.
There's a middle ground. Use AI aggressively, but own every line. This is how I do it.
The stack is deliberately simple:
- Claude Code CLI — the main tool, does the heavy lifting. Lives in the terminal, has full context of the project.
- Neovim — for when I want direct control. Surgical edits, quick reads, navigating code at my own pace.
- Lazygit — git workflow. Every commit goes through me.
That's it. No 10-tool AI stack. No fancy IDE plugin chains. No prompt engineering templates saved in Notion. The tool matters less than how you use it. I've seen developers get better results from a basic setup with a clear workflow than from an over-engineered stack they don't actually understand.
This is the core of it. Five steps, every time. It's not a rigid process — it's a rhythm that keeps AI output from drifting into disconnected code.
1. Explore
Before writing a single line, understand what exists. Use AI to explore the codebase — trace data flows, map dependencies, understand the service layer. Where do API calls live? What patterns does the project already use for form validation? What utilities exist? The goal is to build a mental model of the codebase so that anything you write fits into it, not next to it.
2. Plan together
Write the plan collaboratively. AI suggests architecture and patterns from its vast reference base — it's seen thousands of codebases, so it might catch tradeoffs you haven't considered. Maybe it suggests a state management approach you wouldn't have thought of, or flags that your planned file structure will cause circular dependencies. You evaluate, correct, rewrite. Architecture is collaborative, but you make the final call.
3. Verify the plan
Before any code gets written, confirm exactly which files get touched, created, or deleted. Walk through edge cases. What happens when the API returns an empty array? What if the user navigates away mid-submission? The plan should be so detailed that execution is mechanical. If you can't describe the implementation step by step, you're not done planning.
4. Execute
AI writes the code following the locked plan. At this point it's just a robot typing what you've already decided. The structure, the files, the patterns — all defined. AI fills in the implementation. This is where the speed comes from. Not from AI thinking for you, but from AI typing for you after you've already done the thinking.
5. Review and commit
You review every diff like it came from a teammate. AI never commits. You read it, you own it, you commit it through lazygit. If something looks off — a variable name that doesn't match the project's conventions, an import path that could be cleaner, a function that's doing too much — you fix it before it goes in. The commit is yours.
At work, I had to build a form. Not a simple contact form — a form with dozens of fields, deep conditional logic, and multiple levels of nested data. Editing flows on top of creation flows. Different validation rules depending on which combination of fields the user had filled. State management alone was a nightmare on paper.
The approach
I started by mapping out the entire form structure, every validation rule, and every error state in the plan. No code yet. Just a complete picture of what the form needed to do, what data it touched, and how the editing flow differed from creation.
Then I followed test-driven development. Wrote the tests first — defining expected inputs and outputs for every scenario. What happens when a required nested field is missing? What happens when the user switches from creation to editing mode? The tests described the correct behavior before any implementation existed.
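The tests-first step can be sketched like this. The names (`Draft`, `validateDraft`) and the rules are invented for illustration, not the actual form from the project; in a real TDD flow the implementation is written only after the assertions are agreed on, but a minimal one is included here so the sketch runs:

```typescript
// Hypothetical shape standing in for the real form, which had dozens of
// fields, nested data, and mode-dependent rules.
type Draft = { mode: "create" | "edit"; profile?: { email?: string } };

// Minimal implementation included only so the assertions below execute;
// in the actual workflow, the tests exist before this function does.
function validateDraft(d: Draft): string[] {
  const errors: string[] = [];
  // A required nested field is the kind of case that's easy to miss
  // in conditional logic, so the tests pin it down explicitly.
  if (!d.profile?.email) errors.push("profile.email is required");
  return errors;
}

// The tests describe correct behavior before any implementation exists.
console.assert(
  validateDraft({ mode: "create" }).includes("profile.email is required")
);
console.assert(
  validateDraft({ mode: "edit", profile: { email: "a@b.c" } }).length === 0
);
```

With scenarios like these written down first, the AI-generated implementation either satisfies them or fails loudly; there is nothing to eyeball.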
AI generated the implementation following the locked plan. Tests caught bugs automatically. If AI's code didn't match expected output, it failed immediately — no guessing, no manual checking. The feedback loop was tight: AI writes code, tests verify it, fix what fails, move on.
What would've taken weeks shipped to production in days. Not because AI wrote magical code, but because the plan was airtight and the tests caught every deviation. The TDD angle is key — tests are the safety net. You define correctness before AI writes anything.
AI gets first shot at everything, including debugging. Most of the time, it does fine — trace the error, identify the cause, suggest a fix. But when it fails at debugging — and it does — that's where your skills and experience take over.
AI guesses at bugs. It suggests fixes without understanding the root cause. It'll swap out a dependency, restructure a function, add a null check — anything to make the error go away. But you've traced enough bugs to know where to actually look. You know that the issue isn't the function throwing the error — it's the data three layers up that was shaped wrong. That intuition comes from experience, not from a language model.
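A toy sketch of that failure mode, with invented names; the point is where the fix belongs, not the code itself:

```typescript
type Order = { items: { price: number }[] };

// Layer 3: where the error surfaces. An AI-style fix would add a null
// check here to make the crash go away.
function total(order: Order): number {
  return order.items.reduce((sum, item) => sum + item.price, 0);
}

// Layer 1: where the data was actually shaped wrong. Fixing it at the
// source means no defensive checks scattered through every consumer.
function fromApi(raw: { items?: { price: number }[] }): Order {
  return { items: raw.items ?? [] };
}

console.assert(total(fromApi({})) === 0);
```

The null check at layer 3 would silence this one crash and leave the malformed data free to break the next consumer; normalizing at the boundary fixes the root cause.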
AI raises your floor. Your ceiling is still you.
AI is a multiplier. It multiplies what you already know. If you don't understand architecture, AI writes you a bad one. If you don't write tests, AI ships you bugs. If you can't debug, you're stuck when AI guesses wrong. The tool amplifies the developer — it doesn't replace the skill.
The developers who win aren't the ones using AI the most. They're the ones who were already good — and now they're fast too.
"AI doesn't make you a better developer. It makes you a faster one. The gap between good and bad engineers hasn't shrunk; it just moves at a higher speed now."