Debugging in an AI World

We previously set out a plan for how AI-assisted development could work:

  1. Develop and refine a spec with a chat-based LLM.
  2. Review and approve the spec, checking high-risk areas carefully.
  3. Ask an AI agent to write an implementation plan.
  4. Review the plan, refining or cutting it down if necessary.
  5. Ask the AI agent to implement the spec according to the agreed plan.
  6. Perform a code review, and iterate as required.

In the previous video we used Cursor to work on steps (3), (4) & (5). However, that plan assumed the code the AI writes would only need a code review. It misses out a key step: checking that the code works, and fixing it if it doesn't. Here's the revised plan:

  1. Develop and refine a spec with a chat-based LLM.
  2. Review and approve the spec, checking high-risk areas carefully.
  3. Ask an AI agent to write an implementation plan.
  4. Review the plan, refining or cutting it down if necessary.
  5. Ask the AI agent to implement the spec according to the agreed plan.
  6. Debug generated code.
  7. Perform a code review, and iterate as required.

We could go ahead and debug manually, as we've always done, but surely this is something our AI agent could assist with?

Debugging with an AI Agent

We left Yesterday’s Weather in a state that built, but it immediately showed an error. We’re going to try an AI-first approach to debugging it:
