Conclusion


In this lesson, you’ve seen how Claude Code can tackle the larger, gnarlier issues that come with AI-assisted coding. It can use the full context of the project directory, edit files directly, and perform research on the web.

We used it to solve quite a challenging issue with our API client, and then to build a test suite and get it passing.

Wrapping this up into our AI-assisted development process, we end up with something that looks like this:

  1. Develop and refine a spec with a chat-based LLM.
  2. Review and approve the spec, checking high-risk areas carefully.
  3. Ask an AI agent to write an implementation plan.
  4. Review the plan and, if necessary, refine or cut it down.
  5. Ask the AI agent to implement the spec according to the agreed plan.
  6. Debug generated code.
  7. Perform a code review, and iterate as required.
  8. Ask AI to create a test suite based on the spec and the app code.
  9. Run the test suite, and iterate with AI to ensure all tests pass.
  10. Review the generated test code to check applicability and for missing test cases.
  11. Release and iterate.
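The heart of steps 8 and 9 is a feedback loop: run the tests, and if any fail, hand the failures back to the AI and try again. Here’s a minimal sketch of that loop in Python. Everything in it is hypothetical: `generate_tests` and `ask_ai_to_fix` stand in for real interactions with an AI agent such as Claude Code, and are stubbed out here so the sketch runs.

```python
def generate_tests(spec):
    # Stand-in for step 8: in practice, the AI derives tests
    # from the spec and the app code.
    return [lambda result: result == spec["expected"]]

def ask_ai_to_fix(code, failures):
    # Stand-in for the iterate step: in practice, you'd feed the
    # failure output back to the agent and let it edit the code.
    return {"output": 42}  # pretend the agent fixed the bug

def iterate_until_green(code, spec, max_rounds=5):
    """Step 9: run the test suite; on failure, iterate with the AI."""
    tests = generate_tests(spec)
    for _ in range(max_rounds):
        failures = [t for t in tests if not t(code["output"])]
        if not failures:
            return code  # all tests pass
        code = ask_ai_to_fix(code, failures)
    raise RuntimeError("tests still failing after max_rounds attempts")

buggy = {"output": 0}
fixed = iterate_until_green(buggy, {"expected": 42})
```

The `max_rounds` cap matters in practice too: an agent that can’t get the suite green after a few attempts is a signal to step in and review manually rather than keep iterating.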

Although we’ve put testing at the end here, you could instead adopt a Test-Driven Development (TDD) workflow with the same steps, putting the tests up front.

The tools and models you use to build the app might change, but this strategy is likely to remain fairly stable as AI progresses. Some of these steps will probably be rolled into the AI-coding process itself, but it will always be worth taking the time to review the generated code.
