AI Agents Didn’t Kill Software Engineering Best Practices

Why small changes and deliberate reviews still matter with AI-generated code.


With each advancement in AI models, software engineering gets declared dead again.

This time it’s AI agents. They write code, refactor code, review code, and hallucinate entire APIs that never existed. Recently, this has led to a quiet but dangerous thought creeping into some minds:

“Do we still need to follow the old rules?”

Short answer: yes.

Long answer: absolutely yes.

The Mythical One-Shot

Watching an AI agent one-shot a working prototype is a cool experience. You paste in a vague prompt, hit enter, and suddenly you have a backend service, a UI, and a README that confidently lies about how everything works.

One-shotting is a novelty, not a process. It skips the exact things that keep production systems alive: incremental changes, reviewability, and understanding what just landed in your codebase.

If you wouldn’t accept a 3,000-line pull request from a human, you shouldn’t accept one from an AI agent either.

Code Reviews Still Matter

AI agents change who writes the code, but they do not change how code should be reviewed.

Submitting massive, multi-file, multi-concern changes was bad practice before AI. It is still bad practice now, maybe worse.

Large reviews fail for predictable reasons:

  • Reviewers skim instead of reviewing
  • Logical bugs hide in unrelated changes
  • Few people understand the final system

AI agents generate a lot of code, which means they can generate a lot of unreviewable code.

The fix is boring and effective: smaller changes.
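One way to make that fix concrete is a pre-review gate that flags oversized diffs before a human ever reads them. Here is a minimal sketch that sums added and deleted lines from `git diff --numstat` output; the 400-line threshold and function names are illustrative assumptions, not taken from any particular tool:

```python
# Hypothetical pre-review gate: reject diffs too large to review carefully.
# The threshold is an illustrative assumption; tune it for your team.

MAX_REVIEWABLE_LINES = 400

def diff_size(numstat: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output."""
    total = 0
    for line in numstat.strip().splitlines():
        added, deleted, _path = line.split("\t", 2)
        # Binary files report "-" instead of counts; skip them.
        if added == "-" or deleted == "-":
            continue
        total += int(added) + int(deleted)
    return total

def is_reviewable(numstat: str) -> bool:
    return diff_size(numstat) <= MAX_REVIEWABLE_LINES

# Example: output captured from `git diff --numstat` on a small change.
sample = "120\t30\tapi/endpoint.py\n10\t2\tREADME.md"
print(diff_size(sample))      # 162
print(is_reviewable(sample))  # True
```

Wired into CI, a check like this turns "keep changes small" from a cultural norm into an enforced default, for human and AI authors alike.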

Velocity Isn’t Experience

A useful mental model is to treat AI agents like extremely fast junior engineers. They are productive and tireless, highly confident, and occasionally very wrong.

You wouldn’t ask a junior engineer to redesign your entire system in one pull request. You would break the work down, review each piece, and ask a lot of questions.

The same rule applies here.

Ask the agent to:

  • Modify an API endpoint
  • Refactor a component
  • Change one behavior at a time

Then review it.

Smaller Units, Better Outcomes

In practice, the best results come from reviewing the smallest feasible unit of change, whether that’s a single function, a single file, or a narrowly scoped refactor.

This leads to a better understanding of the changes, easier rollbacks, and faster feedback loops. It also gives the AI model clearer boundaries. When prompts are smaller in scope, you’ll receive better outputs.

Do Not Cite the Deep Magic to Me

I know this isn’t anything new, but using AI agents is not an excuse to abandon the practices that made software engineering teams effective in the first place.

If anything, those practices matter more now:

  • Small pull requests
  • Incremental, reviewable changes
  • Well-defined, narrowly scoped tasks
  • Clear accountability for what gets merged

The tools changed, but the fundamentals didn’t.