How to Use AI Coding Agents and Keep Your Mojo

Leverage AI without losing understanding, ownership, or confidence.

In today’s AI-augmented development landscape, it's tempting to treat generative models as magic bullets. But doing so risks accruing mental technical debt: building systems I don’t fully understand or feel ownership over.

Here’s how I'm trying to move forward: get the best support AI can provide while keeping my skills sharp.

1. AI as a Tool, Not a Shortcut

From the Pragmatic Engineer Newsletter:

LLMs can feel like magic incantations, leading me to spend too long tweaking prompts instead of solving problems. A better approach: timebox AI use. Once the timebox expires, do it myself.

Bonus: keep a record of hard tasks to re‑test when new models are released.
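
A minimal sketch of what that record could look like, assuming a plain JSONL file; the file name, fields, and the log_hard_task helper are all my own invention:

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical log of tasks the current model couldn't handle within the timebox.
LOG_PATH = Path("hard_tasks.jsonl")

def log_hard_task(description: str, model: str, notes: str = "") -> None:
    """Append a task that blew the timebox, to re-test when new models ship."""
    entry = {
        "date": date.today().isoformat(),
        "model": model,
        "task": description,
        "notes": notes,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Example usage after a timebox expires:
# log_hard_task("Refactor the retry logic in the sync worker", model="gpt-4o",
#               notes="Kept hallucinating a non-existent backoff API")
```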

2. Mental Tech Debt = Code You Can’t Explain

Here's what I'm thinking:

Only ask AI to do what you already know how to do, or let AI blaze the trail - but understand what it’s done before accepting the PR.

Mental tech debt happens when I merge code I can't critique or maintain. If I can’t explain it, I should expect it to cause trouble later.

3. Learning One Layer Deeper

To make the above rule practical (see the sketch after this list):

  • Layer 0: The code I’m committing. This must be fully understood.
  • Layer 1: Dependencies or abstractions. These should be intelligible, even if not fully internalised.
  • Layer 2+: Lower-level primitives. I need only be aware of these.
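
To make the layers concrete, here's a small Python sketch; the fetch_user function and API URL are hypothetical, with requests standing in for a Layer 1 dependency:

```python
import requests  # Layer 1: a dependency I should be able to read and reason about

# Layer 0: code I'm committing - I must be able to explain every line of this.
def fetch_user(user_id: int) -> dict:
    """Fetch a user record from a (hypothetical) API and return it as a dict."""
    response = requests.get(f"https://api.example.com/users/{user_id}", timeout=5)
    response.raise_for_status()  # Layer 1 behaviour: I should know what this raises, and when
    return response.json()

# Layer 2+: urllib3 connection pooling, sockets, TLS handshakes.
# I only need to be aware these exist; I don't need to internalise them.
```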

4. Balancing Build Mode and Learn Mode

I'm thinking of dividing my usage into Build and Learn modes.

Build Mode

  • Prototyping: trying to get something working quickly, which I’ll likely throw away later.
  • Building to deploy: in which case, I work with the AI in small pieces.

Learn Mode

  • Here, the aim isn’t to ship; I use the AI as a study buddy to understand.
  • I do most of the coding by hand.

Here are some smells that tell me I'm going too far in either direction:

Over-Reliance on AI (Build Mode):
  • Can’t explain AI code
  • Fear of debugging AI code
  • Skips reading AI code

Over-Resistance to AI (Learn Mode):
  • Avoids AI even for boilerplate
  • Reinvents the wheel
  • Delays prototypes

5. Context-Aware Use

From a study presented at AI Engineer:

“AI effectiveness depends on the context: project maturity, language popularity, task complexity, codebase size, and context window.”
  • Mature, complex codebases → AI struggles; lean on manual review.
  • Popular languages/patterns → AI more reliable; trust but also understand the consequences one layer deeper.
  • High complexity tasks → Use AI for prototyping, then validate in parts before shipping (see the test sketch below).
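
One way I could "validate in parts" is to wrap each AI-generated piece in small hand-written tests before wiring it into the larger change. A pytest sketch, with a hypothetical slugify helper standing in for the AI-generated code so the example runs on its own:

```python
# test_slugify.py - hand-written checks for an AI-generated helper.
# `slugify` here is a stand-in; in practice it would be imported from
# wherever the AI-generated code lives.
import re
import pytest

def slugify(text: str) -> str:
    if not text.strip():
        raise ValueError("empty input")
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def test_basic_slug():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_whitespace():
    assert slugify("  many   spaces  ") == "many-spaces"

def test_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```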

6. Adjusting Expectations

Experienced engineers often rate AI coding assistants as fine - useful, but far from a 10× productivity unlock. The skepticism isn’t anti‑AI; it’s about understanding where the real work happens.

Coding is only one slice of developer productivity. The job also includes:

  • Ideation and requirements shaping
  • Negotiation and stakeholder alignment
  • Bug investigation and triage
  • Code review and knowledge sharing
  • Deployments, release management, and testing

Trying to make a 10‑minute commute 10× faster by driving at 600 km/h won’t work - fixed delays like traffic lights and intersections dominate the timeline. Software delivery has similar fixed costs outside of "typing code."
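
The arithmetic behind that intuition is essentially Amdahl's law; a quick sketch with made-up numbers:

```python
def max_speedup(fixed_fraction: float, boosted_speedup: float) -> float:
    """Overall speedup when only the non-fixed portion of the work gets faster."""
    return 1 / (fixed_fraction + (1 - fixed_fraction) / boosted_speedup)

# If 60% of a 10-minute commute is traffic lights and intersections,
# even driving the rest infinitely fast caps the whole trip at ~1.7x faster.
print(max_speedup(fixed_fraction=0.6, boosted_speedup=1e9))  # ≈ 1.67
```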

The net: assistants help, especially on boilerplate and exploration, but the biggest wins come from improving the entire workflow, not just the act of writing code.

My Rule of Thumb

If I can't verify it or explain it, I’m not merging it.

Sources