<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Vladyslav Kravchenko]]></title><description><![CDATA[Vladyslav Kravchenko]]></description><link>https://vladkrv.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1593680282896/kNC7E8IR4.png</url><title>Vladyslav Kravchenko</title><link>https://vladkrv.com</link></image><generator>RSS for Node</generator><lastBuildDate>Mon, 11 May 2026 17:22:46 GMT</lastBuildDate><atom:link href="https://vladkrv.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Code Generation and Review with AI in Complex Repositories]]></title><description><![CDATA[In this article, you'll read about meaningful code generation and review in complex repositories with AI. As an AI coding assistant, I chose GitHub Copilot, but what you'll learn is easily applicable ]]></description><link>https://vladkrv.com/code-generation-and-review-with-ai-in-complex-repositories</link><guid isPermaLink="true">https://vladkrv.com/code-generation-and-review-with-ai-in-complex-repositories</guid><category><![CDATA[AI]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[github copilot]]></category><category><![CDATA[code review]]></category><category><![CDATA[developer productivity]]></category><category><![CDATA[Developer Tools]]></category><dc:creator><![CDATA[vladkrv]]></dc:creator><pubDate>Mon, 11 May 2026 16:45:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/24eb1b27-c056-472c-b7e8-19e2bcf00cfc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, you'll read about meaningful code generation and review in complex repositories with AI. 
As my AI coding assistant, I chose GitHub Copilot, but what you'll learn is easily applicable to any other coding assistant.</p>
<p>My name is Vlad. I'm a software engineer at Dynatrace with degrees in AI and cybersecurity. I'm also a public speaker, and I've got great things to share.</p>
<h1>Why?</h1>
<p>Most of you have seen fancy demos where AI does impressive things when working from scratch, such as prototyping, toy examples, or cherry-picked cases. But what about serious engineering work in big, complex repositories?</p>
<h2>Expectation vs Reality</h2>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/ae1809e2-4d97-45e0-b9ec-f646adb2a228.png" alt="A two-paths meme showing a cartoon person labeled ME standing at a fork in a dirt road. The left path leads toward a bright, pristine white castle under a sunny sky, with text superimposed reading AI IN DEMO. The right path leads toward a dark, spooky, ruined castle under a dark, stormy sky with purple lightning, with text reading AI IN REAL PROJECT." style="display:block;margin:0 auto" />

<p>If you've tried to apply AI to your daily work, you may have faced issues that led you to think it's not reliable enough yet.</p>
<h2>The Tipping Point</h2>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/fe845b07-8dfa-4411-af20-52a8acc683b4.png" alt="A meme featuring SpongeBob SquarePants smiling enthusiastically with his hands raised, forming a sparkling rainbow arc between his hands. Bold white text with black outlines is superimposed over the top of the image, reading CLAUDE OPUS on the top line and GEMINI PRO on the bottom line." style="display:block;margin:0 auto" />

<p>I had a similar experience. But at a certain point, everything changed. Newer AI models and tools were released, and my own AI knowledge and skills grew to the point that I'm faster with AI than without it. I don't even have to compromise on quality. This was the tipping point: with sufficient guidance, AI can now tremendously aid you in your day-to-day engineering. For the last year, I've been using AI coding assistants daily, learning and iterating. Regardless of your stance on AI, it's now reliable and coherent enough that I encourage everyone to get on board and benefit from it in engineering. And don't worry: AI won't replace you, but a person who uses it well can.</p>
<h1>What?</h1>
<p>In this article, I'll show you exactly how to configure GitHub Copilot in your repository and get meaningful code generation and review. I'll share what I learned and provide a getting-started guide to help you create an initial set of Copilot instructions. While I use GitHub Copilot as my example, the principles defined here are transferable to any AI coding assistant. So let's jump right into it.</p>
<h1>How?</h1>
<h2>Problem Statement</h2>
<p>First, I want you to have a basic conceptual understanding of how AI works, based on the following simple diagram. Given a specific INPUT, the AI processes it into a specific OUTPUT:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/672bee01-1de0-408f-bbe9-9e3ee3898ab9.png" alt="A diagram illustrating a basic AI process. On the left is a green oval labeled INPUT. In the center is a tangled, chaotic scribble of blue lines labeled AI. On the right is a purple oval labeled OUTPUT. A single continuous blue line travels from the INPUT oval, gets heavily tangled in the AI section, and emerges as a single arrow pointing into the OUTPUT oval." style="display:block;margin:0 auto" />

<p>However, AI is a nondeterministic black box, so we can get different outputs for the same inputs:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/5bfa4acd-e856-4e11-a5d2-6536515b8ba8.png" alt="A diagram illustrating generative AI possibilities. It features the same INPUT (green oval), AI (tangled blue scribble), and OUTPUT (purple oval) layout as the first image. A single blue line enters the AI tangle from the INPUT oval. However, instead of a single result, four separate blue arrows emerge from the AI tangle, pointing to different, scattered locations within the OUTPUT oval." style="display:block;margin:0 auto" />

<p>And this is precisely the problem we have to address to get reliable enough output for every task we do.</p>
<p>While it's not possible to solve this problem completely, the next best thing we can do is shrink the output space by guiding the AI with instructions, and by doing so, we'll get a more reliable output:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/06bf3389-7f1f-4783-95d7-29617c5ebfca.png" alt="A diagram illustrating AI output selection or prompting. It builds on the second diagram with the same INPUT, AI, and OUTPUT sections, where one line enters the AI tangle and four arrows emerge into the OUTPUT oval. Inside the purple OUTPUT oval, there is now a smaller, lighter purple circle specifically drawn around just one of the four arrowheads, highlighting a specific or desired result among the various generated possibilities." style="display:block;margin:0 auto" />

<p>In engineering, a preferred output is one that aligns with your codebase, follows established guidelines, avoids banned patterns, and so on.</p>
<h2>Solution: Available Tools</h2>
<p>To shrink the output space, we must guide the AI, and for that we have multiple tools.</p>
<p>While the complete list of tools is extensive and may look intimidating, I suggest focusing on the most deterministic tools, as our goal is to reduce the output space. The first three tools are our primary building blocks for code generation and review:</p>
<ul>
<li><p><strong>Always-on Instructions</strong> - House rules. Always apply while you're in the house (house ~ repository: tech stack, architecture, documentation standards)</p>
</li>
<li><p><strong>File-based Instructions</strong> - Manuals for the house's appliances. Apply while you interact with a specific appliance (appliance ~ file type: component file, test file, style file, etc.)</p>
</li>
<li><p><strong>Prompt Files (Slash Commands)</strong> - Recipes. Applied on demand (recipes ~ algorithms: PR review, a11y review, security audit)</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/a617058f-0c68-4958-8467-2860246795a6.png" alt="Table of tools available to shrink the output space." style="display:block;margin:0 auto" />

<details>
<summary>Screen reader-friendly (text) version of the table</summary>
<table style="width:977px"><colgroup><col style="width:120px"></col><col style="width:97px"></col><col style="width:238px"></col><col style="width:214px"></col><col style="width:308px"></col></colgroup><tbody><tr><td><p>Tool</p></td><td><p>Think of it as…</p></td><td><p>What it does</p></td><td><p>When to use it</p></td><td><p>Why this, not another?</p></td></tr><tr><td><p>Always-on Instructions</p></td><td><p>🏠 House rules</p></td><td><p>Project-wide coding standards and conventions are automatically included in every AI interaction</p></td><td><p>Naming conventions, architecture patterns, banned libraries, and security requirements</p></td><td><p>Unlike file-based instructions, these are never conditional - they define how to behave <em>everywhere</em> in the repo</p></td></tr><tr><td><p>File-based Instructions</p></td><td><p>📖 Appliance manuals</p></td><td><p>Rules that activate only when the AI works on files matching a glob pattern or task description</p></td><td><p>Guide for <code>*.tsx</code> components and <code>use*.ts</code> hooks; test conventions for Playwright <code>*.pw-test.ts</code> and <code>*.test.tsx</code> unit tests</p></td><td><p>Unlike always-on, these activate only for matching files - keeping context lean and rules targeted</p></td></tr><tr><td><p>Prompt Files (Slash Commands)</p></td><td><p>🧑‍🍳 Step-by-step recipes</p></td><td><p>Reusable task templates you invoke on demand in chat via /command</p></td><td><p>Scaffolding a component, preparing a PR, running &amp; fixing tests</p></td><td><p>Unlike instructions (passive guidance), prompts are active - you run them like a command to trigger a specific workflow</p></td></tr><tr><td><p>Custom Agents</p></td><td><p>🎭 Specialist roles</p></td><td><p>Distinct AI personas with their own tools, instructions, and model preferences</p></td><td><p>Security reviewer, planner, solution architect, a11y specialist</p></td><td><p>Unlike prompts (single task), agents change <em>who</em> the AI is - restricting tools and shaping behavior for an entire session</p></td></tr><tr><td><p>Agent Skills</p></td><td><p>🧰 Trade certifications</p></td><td><p>Portable capability bundles (instructions + scripts + resources) loaded on-demand</p></td><td><p>Testing workflows, deployment processes, and debugging procedures</p></td><td><p>Unlike instructions (text files), a skills directory can bundle code and examples; unlike agents (identities), skills are <em>capabilities</em> any agent can use. Open standard across tools</p></td></tr><tr><td><p>MCP Servers</p></td><td><p>🔌 Utility connections</p></td><td><p>Plug external APIs, databases, and services into the AI via the Model Context Protocol</p></td><td><p>Querying a database, fetching from Jira, interacting with a browser via Playwright</p></td><td><p>Unlike skills (local knowledge), MCP connects to live external systems that the AI cannot reach on its own</p></td></tr><tr><td><p>Hooks</p></td><td><p>⚡ Circuit breakers</p></td><td><p>Shell commands that execute automatically at agent lifecycle points (before/after tool use, session start/stop)</p></td><td><p>Block dangerous commands, auto-format after edits, and audit all tool invocations</p></td><td><p>Unlike everything above (guidance for the AI), hooks are deterministic code - they execute <em>regardless</em> of how the AI interprets your prompt</p></td></tr><tr><td><p>Language Models</p></td><td><p>🧠 Engine selection</p></td><td><p>Choose different AI models optimized for speed, reasoning depth, or specialized tasks like vision</p></td><td><p>Fast model for quick refactors, powerful model for architecture decisions, vision model for UI work</p></td><td><p>Not a behavioral instruction - this is about picking the right brain for the job</p></td></tr></tbody></table>
</details><details>
<summary>Table sources</summary>
<p>All information in the table above is based on the official VS Code GitHub Copilot documentation:</p><p><a target="_self" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://code.visualstudio.com/docs/copilot/customization/overview" style="pointer-events:none">Customize AI in VS Code - Overview &amp; Quick Reference</a></p><p><a target="_self" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://code.visualstudio.com/docs/copilot/customization/custom-instructions" style="pointer-events:none">Custom Instructions (always-on &amp; file-based)</a></p><p><a target="_self" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://code.visualstudio.com/docs/copilot/customization/prompt-files" style="pointer-events:none">Prompt Files (slash commands)</a></p><p><a target="_self" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://code.visualstudio.com/docs/copilot/customization/custom-agents" style="pointer-events:none">Custom Agents</a></p><p><a target="_self" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://code.visualstudio.com/docs/copilot/customization/agent-skills" style="pointer-events:none">Agent Skills</a></p><p><a target="_self" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://code.visualstudio.com/docs/copilot/customization/mcp-servers" style="pointer-events:none">MCP Servers</a></p><p><a target="_self" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" 
href="https://code.visualstudio.com/docs/copilot/customization/hooks" style="pointer-events:none">Hooks</a></p><p><a target="_self" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://code.visualstudio.com/docs/copilot/customization/language-models" style="pointer-events:none">Language Models</a></p>
</details>

<p>When I tried out <strong>custom agents</strong>, they had no measurable impact on the task of general code generation and review. Overall, custom agents are more about scoping the regular "all-knowing" agent to a perspective, with a specific set of tools and skills. You won't need them at first, but as your AI customization grows, you may want to define a custom coding/review agent simply to scope the tools loaded. Furthermore, as of now, <a href="http://vladkrv.com">GitHub actually recommends using custom instructions for code generation and review</a>.</p>
<p><strong>MCP servers</strong> can be highly effective, but their value largely depends on which MCP server you use and how it's implemented. Component library or design system MCPs, for example, are well worth your attention if you work on frontend. Before using any MCP, make sure it's trustworthy, as malicious MCPs do exist. Also, in most cases, when both an MCP and a CLI are available, the CLI is preferable.</p>
<p><strong>Hooks</strong> are a relatively new feature that recently came out of preview. They look very promising for our task of making the output space more deterministic, so I highly recommend looking into them once you've established the foundation using custom instructions and prompt files. They're especially useful for putting security measures in place, for example, to deterministically forbid reading <code>.env</code> files.</p>
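<p>As a sketch of what such a guard can look like, here is a hypothetical hook configuration modeled on Claude Code's <code>settings.json</code> hooks schema; the matcher, script path, and overall shape are assumptions for illustration, so check your assistant's hooks documentation for the exact format it expects:</p>
<pre><code class="language-json">{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Read",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/block-env-reads.sh"
          }
        ]
      }
    ]
  }
}
</code></pre>
<p>The referenced script (also hypothetical) would inspect the requested file path and exit with a blocking status whenever it points at a <code>.env</code> file, so the rule holds no matter how the prompt is phrased.</p>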
<p>Lastly, the <strong>language model</strong> you select matters a lot. Generally, the latest Claude Opus is best for engineering, and the latest Gemini Pro is a good all-around option. As a rule of thumb, the higher a model's rate (its cost multiplier), the more capable it is. Keep in mind that it's wise to use a low-rate model for low-effort tasks to keep your quota usage balanced.</p>
<p><em>As of now, GitHub Copilot has request-based quota. This means that token count is irrelevant. Claude Code has token-based quota. Therefore, depending on your tools' billing model, you may need to optimize for different things - saving requests versus saving tokens.</em></p>
<h1>Example setup</h1>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/3edc4c9e-de9b-400f-9e49-310a50dcfc9d.png" alt="A meme featuring the character Gru from Despicable Me, wearing his signature striped scarf and smiling slightly while aiming a silver handgun directly at the viewer. Bold white text with black outlines reads DO NOT MAKE MISTAKES at the top and PLEASE at the bottom." style="display:block;margin:0 auto" />

<p>Unfortunately, there is no silver bullet: the approach in the meme above didn't work, even when I asked nicely or threatened the AI coding agent.</p>
<p>Hence, we have to approach the problem as engineers. Based on my experience, here is a foundational setup I'd recommend for a complex frontend repository (~100k lines + microservices):</p>
<ul>
<li><p><strong>1 global always-on instruction</strong> - your "house rules" that apply to every file, with every prompt</p>
</li>
<li><p><strong>N scoped instruction files</strong> - one for each distinct file type pattern you have (components, hooks, tests, styles, etc.)</p>
</li>
<li><p><strong>1 prompt file</strong> - covering your most repetitive task (for us it's reviewing code)</p>
</li>
</ul>
<pre><code class="language-plaintext">.github/
├── copilot-instructions.md         ← Global (always active)
├── instructions/
│   ├── components.instructions.md   ← src/**/*.tsx, use*.ts
│   ├── state-management.instructions.md ← *State.ts files
│   ├── styling.instructions.md          ← *.css.ts files
│   ├── unit-testing.instructions.md     ← *.test.ts(x)
│   ├── e2e-testing.instructions.md      ← e2e-tests/**
│   ├── integration-testing...md      ← integration-tests/**
│   └── accessibility.instructions.md  ← src/**/*.tsx, tests
└── prompts/
    └── pr-review.prompt.md       ← /pr-review slash command
</code></pre>
<p>The exact number and names of scoped instruction files depend on your project. The key idea is: <strong>one instruction file per distinct file type pattern</strong>. If your AI needs to behave differently when working on a component file versus a test file versus a state file, give each its own instruction file with a matching glob pattern.</p>
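<p>To make this concrete, here is a minimal sketch of what one of the scoped files could look like, using the <code>applyTo</code> frontmatter that VS Code Copilot instruction files support; the glob and the rules themselves are invented placeholders you'd replace with your project's conventions:</p>
<pre><code class="language-markdown">---
applyTo: "**/*.test.ts,**/*.test.tsx"
---

# Unit testing rules

- Query by role via React Testing Library; never query by CSS class.
- One behavior per test; name tests "should ..." from the user's perspective.
- Mock network calls with our shared test server helper, not ad-hoc mocks.
</code></pre>
<p>Other assistants offer equivalent mechanisms under different names, but the idea is the same: the rules load only when a matching file is touched.</p>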
<h2>Why this setup?</h2>
<p>You may wonder:</p>
<blockquote>
<p>Why this and not Custom Agents + Custom Skills?</p>
</blockquote>
<p>It's a common question, and it's great if you ask it; I love when people question things. There are two main reasons to use Custom Instructions over Custom Agents + Custom Skills for Code Generation and Review.</p>
<h3><strong>Loading Mechanism</strong></h3>
<p><strong>Skills</strong> descriptions are always in memory. Imagine a stack of books, each describing a skill - every cover carries a brief summary: "How to wash a cat", "How to peel a banana", and so on. The decision of which books to open and follow is fully up to the agent. This nondeterministic behavior hinders our efforts to shrink the output space.</p>
<p><strong>Custom instructions</strong>, by contrast, are loaded deterministically. Global instructions <code>.github/copilot-instructions.md</code> are <em>always</em> loaded while you're in the repository. File-based instructions, <code>.github/instructions/&lt;type&gt;.instructions.md</code>, are loaded only when the specified <code>applyTo</code> glob matches. Getting back to the example with skills: if you hold a cat, you'll deterministically get the instructions for washing it; if you hold a banana, you'll deterministically get the instructions for peeling it.</p>
<p>By choosing Custom Instructions over Skills, we opt for a more deterministic loading mechanism, and hence we get a more reliable output. This is the most important reason.</p>
<h3>Storing Mechanism</h3>
<p>As you may have noticed in the examples above:</p>
<ul>
<li><p><strong>Skills</strong> are a stack of books you always carry around, while</p>
</li>
<li><p><strong>Custom Instructions</strong> are only given when you need them</p>
</li>
</ul>
<p>Over time, as you get more and more skills, AI context will get more and more bloated, and the nondeterministic factor will increase. Your stack of books will get <em><strong>heavy</strong></em>. Your agent will struggle with selecting a correct skill when there are dozens of them and some are even overlapping.</p>
<p>To somewhat mitigate the issues that come with the growth of your AI customization setup, you'd need to add Custom Agents as wrappers for a set of skills and tools.</p>
<p>Overall, <strong>Custom Agents + Custom Skills</strong> are much harder to maintain than a set of <strong>Custom Instructions per distinct file type</strong>, and when it comes to the general task of code generation and review, they are ultimately the wrong solution when you can use custom instructions.</p>
<h1>Setting up your own AI Customization</h1>
<p>Now that you know how your foundational setup should look, let's talk about how exactly you should define it. As we all work with different technologies, it makes no sense for me to show you specific code snippets or anything like that. Instead, we'll focus on principles.</p>
<h2>Principles that make instructions effective</h2>
<p>After several months of daily use and iteration, I've distilled what makes instructions effective into a number of principles. These are not GitHub Copilot-specific - they apply to any AI coding assistant that supports customization. And since the AI world changes every week, I designed these principles so they stay relevant over time.</p>
<h3>Core Principle: Always Reduce Output Space</h3>
<p>This is the core rule behind every decision you make when customizing AI. Among the many customization options available - instructions, agents, skills, prompt files, MCP servers - always select the one that reduces the output space the most.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/d3219f18-9bd7-4639-902b-658d0cce83d9.png" alt="A diagram illustrating the process of narrowing down AI results. It features three sections: a green INPUT oval on the left, a tangled blue scribble labeled AI in the center, and an OUTPUT section on the right. A line travels from the input, becomes tangled in the AI section, and emerges as four separate arrows. The OUTPUT section consists of three nested, progressively smaller circles in light blue, light orange, and pink, visually representing a reduced output space. One specific arrow points directly into the smallest, innermost pink circle, highlighting a highly targeted result, while the other three arrows point to the broader, less defined areas of the outer circles." style="display:block;margin:0 auto" />

<p>This works both ways. When you pick a tool that constrains AI behavior tightly (like custom instructions, which are always loaded and always read), the output becomes consistent and reliable. When you pick a tool that expands it (like custom agents with custom skills, which the model may or may not follow depending on context), the result quality varies significantly - and it may not be reliable enough for daily use.</p>
<p><strong>A concrete example</strong> (recap from the Why this setup? section above):</p>
<p><em>GitHub Copilot offers both custom instructions and custom agents with skills. Instructions are stored as markdown files, loaded automatically based on glob patterns, and applied deterministically - the AI reads them every time. Custom agents and skills, by contrast, define a persona and a set of capabilities, but the model follows them at its own discretion. In practice, after long and thorough A/B testing, I found that custom instructions perform better for code generation and review.</em></p>
<blockquote>
<p>Every other principle is an application of this rule.</p>
</blockquote>
<h3>P2: Start with the big picture</h3>
<p>Imagine that the AI agent is a contractor whose memory is wiped every time you assign a task. Your AI customization serves as persistent memory for that contractor. Any time a session starts, the agent can read your customization files instead of figuring out from scratch what is going on.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/59f45e3c-0bb8-4940-927e-1ec87fad67cc.png" alt="An inverted pyramid diagram illustrating a top-down progression from broad concepts to narrow details. The pyramid consists of five horizontal color-coded layers, narrowing from top to bottom. The widest top layer is purple and reads Project purpose and tech stack. Below it is a blue-purple layer reading Architecture, followed by a blue layer reading Patterns, and a teal layer reading Anti-Patterns. The narrowest green tip at the bottom connects via a dotted line to the text Specific rules on the right." style="display:block;margin:0 auto" />

<p>Your global instruction file should open with what the project <em>is</em> - a one-liner on purpose, tech stack, and architecture style. Then narrow into coding standards, patterns, and anti-patterns.</p>
<p><strong>Why it works:</strong> it gives the AI the same context tree you'd give a new team member during onboarding. Without the big picture, the AI has to guess the project's nature from the files it reads, and it often guesses wrong. And even when it guesses right, it may still make sense to save repeated guessing by defining a line in instructions.</p>
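<p>A sketch of how the opening of a global instruction file might be layered, following the pyramid above; the project name and all details are invented placeholders:</p>
<pre><code class="language-markdown"># Project overview

Acme Dashboard: a React + TypeScript frontend (~100k LOC) talking to
microservices over REST. Feature-sliced architecture, strict lint config.

## Architecture
- Features live in src/features; shared UI lives in src/components.

## Patterns
- All data fetching goes through the shared query layer, never raw fetch.

## Anti-patterns
- No default exports; no inline styles; no direct DOM access.
</code></pre>
<p>Each layer narrows the one above it, so the AI reads purpose first and specific rules last.</p>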
<h3>P3: Keep iterating over your setup</h3>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/ecac0331-c0ef-4c9a-adfb-f98f91ae7ae1.png" alt="A line graph titled Reliability. The chart features a y-axis representing percentages from 58% to 74% and an x-axis numbered from 0 to 25. A blue line with dotted data points tracks progress from point 1 to point 20. Starting at 60%, the line shows a clear, overall upward trend despite a few minor dips, eventually peaking near 72%. The graph visually demonstrates how continuous iteration leads to a steady increase in reliability over time." style="display:block;margin:0 auto" />

<p><em>Note: the chart is illustrative only and does not represent real measured data.</em></p>
<p>We already defined memory for the agent. Next, make sure that this memory - your AI customization - grows over time, the same way a new colleague's knowledge grows during their work. You shouldn't treat instructions as a set-once-and-forget artifact.</p>
<p>The first version of your instructions will be imperfect, and that's fine. What matters is that every time the AI makes a repeatable mistake, you recognize it as a signal: there's a missing rule, a vague rule, or a wrong rule. Add it, clarify it, fix it. Over weeks and months, your instructions become increasingly precise, and the AI's output becomes increasingly reliable.</p>
<p>This is fundamentally different from linting rules or CI checks, which are static once written. AI instructions are a living document that evolves with the codebase and the team's understanding of how to guide the model.</p>
<p>Another benefit of iterating is a clear progression. If you were to <em>copy-paste a best-practice setup</em>, it may work well, but not as well as the one tailored over time, and also not as cost-effectively, because all the redundant customization consumes tokens.</p>
<p><em>Note that it would also rob you of the little clarity you could get from A/B testing while iterating on your own setup. What you copy may work, but you'll have little to no insight into what is important and what is not.</em></p>
<h3>P4: Establish a reflection process</h3>
<p>We have established a persistent memory and a continuously growing knowledge base. Next, make sure that this memory and its quality grow and stay high. For that, we need to establish a reflection process.</p>
<p>Let the AI self-reflect on the mistakes it makes. When something goes wrong, let the AI analyze what happened, define actionable adjustment points, and present them to you for verification. Once you approve, the agent applies the adjustment to the instruction files directly.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/0d35d6ac-4224-4f96-9fe5-7f5b716699c5.png" alt="A flowchart illustrating a continuous feedback loop between an AI and a user. The cycle flows downwards through five color-coded rectangular steps before looping back to the beginning: Blue box (top): a robot icon next to the text AI works on code. An arrow points down to... Orange box: a magnifying glass icon next to the text Detects discrepancy or mistake. An arrow points down to... Purple box: a lightbulb icon next to the text Suggests actionable fix to instruction files. An arrow points down to... Green box: a user silhouette icon next to the text User reviews and approves. An arrow points down to... Teal box (bottom): a pencil icon next to the text Agent applies adjustment. A long curved arrow leads from the bottom teal box all the way back up to the top blue box, completing the cycle." style="display:block;margin:0 auto" />

<p>This same reflective process can automate customization maintenance more broadly. Whenever the AI detects inconsistencies between the codebase and the instructions, such as a deprecated pattern that instructions still prescribe, a new convention that instructions don't mention, or redundant rules across files, it can suggest changes and apply them after your approval.</p>
<p>You'll see suggestions like: <em>"Your instructions say to use pattern A, but 15 out of 16 files in the codebase use pattern B. Should I update the instructions?"</em> Just approve the fix, and your instructions stay current.</p>
<p>The result is a feedback loop: the AI helps maintain its own guidance, keeping instructions aligned with reality without significant manual effort.</p>
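<p>One lightweight way to bootstrap this loop is a reusable reflection prompt. The wording below is only a starting-point sketch, not a prescribed format:</p>
<pre><code class="language-markdown"># Reflect on the last mistake

1. State what went wrong and which instruction file (if any) should have
   prevented it.
2. Propose a minimal, actionable edit to that file, or a new rule if none
   exists.
3. Wait for my approval, then apply the edit to the instruction file directly.
</code></pre>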
<h3>P5: Encode team processes, not just code patterns</h3>
<p>Your instruction files aren't limited to code style; they can also encode your team's processes. Because AI is a nondeterministic tool, spell these processes out per task. Take code review as an example. Within your AI customization, encode things like:</p>
<ul>
<li><p>Commit/changeset message format rules</p>
</li>
<li><p>PR description templates (what a good PR description includes)</p>
</li>
<li><p>Versioning conventions (how to determine patch vs. minor from the branch name)</p>
</li>
<li><p>Review checklists (code quality, security, accessibility, testing, documentation)</p>
</li>
</ul>
<p>Leave the high-stakes, high-level tasks to humans: architecture, design, business logic correctness, and knowledge sharing.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/e3bd0e4a-dfa6-4e6a-945f-59e11a3b7623.png" alt="A side-by-side comparison diagram illustrating the division of labor in code reviews. On the left, a light purple panel titled AI Reviewer Handles (featuring a robot icon) lists four automated tasks: banned patterns, consistency with the codebase, commit or changeset message format, and PR description completeness. On the right, a light blue panel titled Human Reviewer Focuses On (featuring a human silhouette icon) lists four higher-level, cognitive tasks: architecture decisions, business logic correctness, design trade-offs, and knowledge sharing." style="display:block;margin:0 auto" />

<p>These are exactly the kind of things that are tedious for reviewers and trivial for the AI to enforce, if you tell it how.</p>
<p><strong>Why it works</strong>: automates the mechanical parts of review, freeing humans to focus on architecture and logic. The AI becomes a reliable first pass that catches what humans often overlook.</p>
<h3>P6: Encourage AI to ask questions instead of assuming</h3>
<p>Add a section telling the AI to ask clarifying questions before starting work, and to ask again whenever new uncertainties arise during work.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/68c0a172-e9fc-4119-9fa5-27a766e3d96f.png" alt="A side-by-side comparison of two AI interaction workflows. The left panel (green) shows a successful four-step downward flow: user writes a one-minute rough ask, AI asks three clarifying questions, user answers briefly, AI executes correctly. The right panel (red) shows a frustrating four-step downward flow: user spends 15 minutes writing a detailed prompt, AI executes, wrong result, redo." style="display:block;margin:0 auto" />

<p>This saves you enormous time on prompt writing. Instead of considering every possible detail and spending 15 minutes on a comprehensive prompt, you can write a rough one-minute ask. The AI will explore the codebase, answer what it can on its own, and then ask about everything else - all within one request, so your quota isn't needlessly consumed by back-and-forth.</p>
<p><strong>Why it works</strong>: it reduces ambiguity without upfront human effort. The AI becomes a collaborator that surfaces the right questions rather than a tool that silently assumes. Assumptions lead to mistakes; mistakes lead to another try, another loop, and that costs far more - in both tokens and engineering time - than answering a couple of questions.</p>
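<p>A minimal version of such a section, with illustrative wording, might read:</p>

```markdown
## Ask before assuming
- Before starting work, list your open questions and ask them in one batch.
- If new uncertainties appear mid-task, pause and ask rather than guess.
- Never invent requirements, file paths, or API shapes; confirm them first.
```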
<h3>P7: Define banned patterns, refer to golden files for approved replacements</h3>
<p>AI models come with biases from their training data - they'll default to patterns they've seen most often, which may not be what your project uses. To counteract this, explicitly define what's banned and point to your golden files (reference implementations) as the source of approved replacements.</p>
<p>Don't just say <code>don't use X</code>. Say <code>use Y instead of X, see golden-component.tsx for the approved pattern</code>. A table format works well for the bans:</p>
<table>
<thead>
<tr>
<th>Banned</th>
<th>Use instead</th>
</tr>
</thead>
<tbody><tr>
<td>Pink/Purple/Indigo Gradients</td>
<td>Semantic, Brand-Specific Hex Codes</td>
</tr>
<tr>
<td><code>framer-motion</code> Overkill</td>
<td>CSS Transitions or Purposeful Motion</td>
</tr>
<tr>
<td>Single-File Monoliths</td>
<td>Modular Component Architecture</td>
</tr>
<tr>
<td><code>useState</code> Hell</td>
<td>Consolidated State (<code>useReducer</code> or a store)</td>
</tr>
</tbody></table>
<p><em>Note: the table is for illustrative purposes; a real customization file would have to be more specific to be effective.</em></p>
<p>For golden files, point to one or two real files per file type that exemplify the established patterns: "When in doubt, follow this example." The AI can read these at any time to see how things are done in practice.</p>
<p>And, of course, you can also refer to golden files in the banned patterns table.</p>
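<p>Combined, a banned-patterns section that leans on golden files might look like this (all file paths and entries are illustrative):</p>

```markdown
## Banned patterns
| Banned | Use instead |
| --- | --- |
| Inline gradient styles | Brand tokens, see `src/theme/tokens.ts` |
| `framer-motion` for simple fades | CSS transitions, see `src/components/golden-component.tsx` |

## Golden files
- Components: `src/components/golden-component.tsx`
- Data fetching: `src/api/golden-query.ts`

When in doubt, follow these examples.
```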
<p><strong>Why it works:</strong> banning alone leaves a vacuum - the AI will either ignore the ban or invent a wrong alternative. Golden files fill the vacuum with a concrete example. Together, they help the AI unlearn biased patterns and learn your project's conventions.</p>
<h3>P8: Require self-validation before "done"</h3>
<p>Tell the AI to run typecheck, lint, and the relevant tests before claiming a task is complete. Something like:</p>
<blockquote>
<p>"Before stating that a task is complete, ALWAYS run validation: typecheck, lint, and unit tests for affected code. All commands must pass before considering the task done."</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6a01a1ccfca21b0d4bf5efaa/6407e97e-112d-4041-ae50-afab83173da7.png" alt="A side-by-side comparison flowchart showing two workflows for AI code generation. The left side, outlined in red and labeled X and Without P8, shows a manual, repetitive loop: AI generates code, says done, you run tests, tests fail, back to AI (which loops back to the start via a dotted arrow). The right side, outlined in green and labeled With P8, shows an automated, successful flow: AI generates code, runs typecheck, runs lint, runs tests, fixes failures (if any), says done, you review working code." style="display:block;margin:0 auto" />

<p>This one rule changes the experience more than anything else. Instead of the AI handing you broken code and saying "done," it now self-validates every time and keeps working until all checks are green.</p>
<p><strong>Why it works:</strong> it shifts the first round of verification cost to the AI. You still review the output, but it arrives in a working state.</p>
<h2>Demo</h2>
<p>Now it's time for practice. You'll see multiple demos below. The demo videos, unfortunately, don't include commentary - for that you'd need to attend one of my live talks. However, the demos are self-explanatory, and I'll describe the overall ideas for each of them here.</p>
<h3>How to create AI Customization files</h3>
<p>Most modern AI coding assistants ship with prompt/command/flow helpers that guide you through creating AI customization. These commands can be invoked via <code>/</code> in the tool's input field or, in the case of Copilot, from the UI.</p>
<p>In the demo below, you'll see what your first steps with GitHub Copilot can look like, and how to create the most important piece of customization: always-on instructions.</p>
<p><a class="embed-card" href="https://youtu.be/abna5gginNY">https://youtu.be/abna5gginNY</a></p>

<h3>How to create a review prompt</h3>
<p>Next, we'll create a prompt for reviews. In this demo, you can see how I used AI to analyze the last ~500 <code>fix</code> PRs for the most common issues and bad patterns, and then created a command to help catch those.</p>
<p><a class="embed-card" href="https://youtu.be/e0SNStLJ_HE">https://youtu.be/e0SNStLJ_HE</a></p>

<h3>Reviewing code with custom "review" command</h3>
<p>Now we're going to use that command on the last merged commit in the <code>immich</code> repo. You'll see that even though the PR had approvals from humans, there are still a couple of issues, albeit minor, that were easily surfaced via our review command.</p>
<p><a class="embed-card" href="https://youtu.be/LsdWcUlHZkc">https://youtu.be/LsdWcUlHZkc</a></p>

<h3>General tips on AI Agent mode</h3>
<p>Here I show what a good agent flow can look like. Since there is no single project or stack everyone is familiar with, instead of generating code, I opted to generate an article about <code>immich</code>, as this is something everyone can understand. Code generation would work exactly the same.</p>
<p>Pay specific attention to prompt instructions regarding subagents and <code>#askQuestions</code> tool calls to avoid assumptions.</p>
<p>Also note that since this all happens within one request, I'm billed only once, regardless of how long it runs or how many tokens are consumed. With a per-request billing model like this, instructing the model to ask questions is even more beneficial.</p>
<p><a class="embed-card" href="https://youtu.be/-5p0ZsgoL1I">https://youtu.be/-5p0ZsgoL1I</a></p>

<h1>Your next steps</h1>
<h2>Call to action</h2>
<blockquote>
<p>💡 Get inspired by the presented setup</p>
</blockquote>
<blockquote>
<p>📄 Use the /init command to get a template to adjust, or do it manually</p>
</blockquote>
<blockquote>
<p>♻ Iterate over the instructions continuously as you see mistakes Copilot makes</p>
</blockquote>
<blockquote>
<p>🤖 Benefit from AI-assisted code generation and review, and advance your AI skills</p>
</blockquote>
<h2>Reach out to me</h2>
<p>Feel free to ask questions here, connect with me on LinkedIn, come to my next public talk, or invite me as a speaker. Let me know what you liked or disliked, and what you'd like to learn next; I have a lot to share.</p>
<p>My socials:<br />LinkedIn: <a href="https://www.linkedin.com/in/vladkrv">https://www.linkedin.com/in/vladkrv</a><br />GitHub: <a href="https://github.com/vladkrv">https://github.com/vladkrv</a></p>
]]></content:encoded></item></channel></rss>