
Making the most of agentic coding

What to expect

You've probably been there: excitedly asking ChatGPT to "just build this feature" only to watch it confidently generate code that doesn't compile, uses deprecated APIs, or solves the wrong problem entirely. The frustration is real.

Asking the agent to migrate your core Django app, built up over ten years, to Node.js and React is probably more than it can do in one shot. This is the point where most people give up and write their LinkedIn post about the AI hype bubble. But agents are just as good at question-and-answer over a large codebase as they are at solving LeetCode problems, so take it a step further: ask the agent to come up with three different plans of attack for your problem or feature. You will get far more out of agents if you let them do some discovery and planning before writing code.

Coincidentally, this is the same process any engineer should follow when starting on a feature.
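
Hint: a planning prompt can be as simple as: "Before writing any code, look around the codebase and propose three different plans of attack for this feature, with trade-offs. Do not implement anything yet."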

Use custom instructions 

Some examples include AGENTS.md, CLAUDE.md, and .github/copilot-instructions.md. Check your editor or agent documentation for details. 

These files inject important context at the start of every conversation (or sometimes every request). I suggest keeping whatever you add small and light, along these lines (a combined sketch follows the examples):

Project workflow: make change → run "make compile" → run "make test" → run "make test-integration" → all tests must pass

Tech debt rules: docs live in docs2/... not docs/..., always run ./login.sh before ./start.sh

Git workflow: use the "gh" command to open PRs and inspect GitHub Actions with "gh run view", etc.
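
Put together, a minimal AGENTS.md along those lines might look something like the sketch below. The make targets, the docs2/ path, and the gh commands are just the examples from above; substitute whatever your project actually uses.

    # Project notes for agents

    ## Workflow
    - After any change: run "make compile", then "make test", then "make test-integration".
    - All tests must pass before a task is considered done.

    ## Tech debt
    - Docs live in docs2/..., not docs/...
    - Always run ./login.sh before ./start.sh.

    ## Git
    - Use the "gh" CLI to open PRs and to inspect GitHub Actions runs ("gh run view").
    - Commit after every major change is complete; do not push.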

Tightening the feedback loop

One of the best things that you can do to make your agentic coding more productive also turns out to be super nice to have in general: tests.

Thankfully, LLMs are pretty good at writing unit tests, and those tests help tighten the feedback loop a lot.

Speeding up slow tests and fixing flaky ones is a really good practice and will help you and your agent build the confidence needed to trust the changes it's making.

Context management and memory

Agents don't have perfect memory across long sessions. Break complex tasks into smaller, focused conversations or use project-specific documentation to maintain context. Consider creating a "working notes" file that the agent can reference and update as it progresses through multi-step tasks.
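
The exact shape of the notes file doesn't matter much; a template like the one below (purely illustrative) is enough for the agent to pick up where it left off:

    # Working notes

    ## Goal
    - One or two sentences on the feature or fix in progress.

    ## Plan
    - [x] Step 1 - done, with a one-line note on what changed
    - [ ] Step 2 - in progress
    - [ ] Step 3 - not started

    ## Gotchas
    - Anything surprising discovered along the way, so the next session doesn't rediscover it.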

Version control best practices

Commit frequently when working with agents. Create checkpoint commits before major changes so you can easily revert if the agent goes off track. Use descriptive commit messages that indicate AI-assisted work for future reference. 

Hint: add this to your custom system prompt: "commit after every major change is complete, do not push"

Know when to intervene

Learn to recognize when an agent is stuck in a loop or making the same mistake repeatedly. Sometimes a small human nudge or clarification can get things back on track faster than letting it struggle. Don't be afraid to course-correct or take manual control when needed.

Leverage the agent's strengths

Agents excel at boilerplate generation, test writing, documentation, code analysis, and explaining unfamiliar codebases. Use them for these tasks while keeping the high-level architecture decisions and complex business logic to yourself.

Set up your environment for success

Ensure your development environment has good error messages, fast test suites, and clear build processes. The easier it is for the agent to understand what went wrong, the faster it can course-correct.

If your project uses a semi-obscure or newer library, download the relevant docs into a subfolder the agent can access, or configure an MCP server that gives it the tools to look up the docs. This helps eliminate hallucinated methods and function arguments.
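
If you go the local-docs route, add a short pointer in your instructions file so the agent actually uses them; the path below is just an example:

    ## Library docs
    - Local copies of third-party docs live under docs/vendor/<library>/.
    - When unsure about an API, read those docs instead of guessing method names or arguments.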

TLDR

Calibrate expectations: Treat agents like junior developers who need clear guidance and planning - helps avoid frustration when complex tasks fail

Custom instructions: Inject project-specific workflows and rules upfront - saves time correcting the same mistakes repeatedly

Tighten feedback loops: Invest in good tests and fast builds - gives agents (and you) confidence to make changes quickly

Manage context: Break big tasks into focused sessions with documentation - prevents agents from losing track of progress

Version control discipline: Commit frequently with descriptive messages - makes it easy to backtrack when things go sideways

Know when to intervene: Recognize when agents are stuck in loops - prevents wasted time on repetitive failures

Play to strengths: Use agents for boilerplate, tests, and analysis while keeping architecture decisions human - maximizes productivity for both

Environment setup: Ensure clear error messages and fast feedback - helps agents self-correct instead of flailing