1. Developers are accountable
We know. This has been rule #1 since the beginning. But when your new pair programmer is an AI that can generate 100 lines of plausible-but-flawed code in 3 seconds, “accountability” takes on a new, more urgent meaning. It's no longer just about owning what you type; it's about being a deeply skeptical senior reviewer for every line of code you accept.
Example: Where accountability matters
An AI assistant is asked to process a list of user IDs from a database. Its output looks perfectly reasonable at first glance.
What the AI wrote:
// AI-generated function to fetch and process user data
async function processUsers(userIds) {
  for (const id of userIds) {
    const user = await db.fetchUser(id); // I/O call, awaited one at a time
    if (user) {
      service.process(user); // Another potentially slow operation
    }
  }
}
The accountability gap: A junior dev might accept this. You, the accountable expert, see a major performance bottleneck. The loop awaits each database call before starting the next: 100 users = 100 serial network requests. This code is a future outage waiting to happen. Accountability here means catching this subtle but critical flaw.
The accountable correction:
// Human-corrected, robust version
async function processUsers(userIds) {
  const userPromises = userIds.map(id => db.fetchUser(id)); // Run requests in parallel
  const users = await Promise.all(userPromises); // Wait for all to complete
  for (const user of users) {
    if (user) {
      service.process(user);
    }
  }
}
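One accountable follow-up worth weighing: Promise.all fires every request at once, and for very large ID lists that can hammer the database just as badly. Here's a minimal sketch of a middle ground, reusing the same hypothetical db and service objects, that processes the list in bounded batches:

// Process IDs in fixed-size batches: parallel within a batch, sequential across batches
async function processUsersInBatches(userIds, batchSize = 25) {
  for (let i = 0; i < userIds.length; i += batchSize) {
    const batch = userIds.slice(i, i + batchSize);
    const users = await Promise.all(batch.map(id => db.fetchUser(id)));
    for (const user of users) {
      if (user) {
        service.process(user);
      }
    }
  }
}

Whether 25 is the right cap depends on your connection pool; the point is that the accountable reviewer asks the question.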
2. Over-document your project context
In the AI era, context is a two-level game: furnishing AI tools with high-level project architecture for long-term understanding, and supplying rich, task-specific instructions through prompts for immediate action.
Part 1: Feed the AI your architecture (The macro context)
Your AI coding assistant is like a new team member who needs to be onboarded. It can't understand your project's intentions or structure by magic. Provide it with the same high-level documentation you'd give a human developer.
- Maintain key architectural documents: Keep artifacts like these up-to-date, as they provide critical context for AI tools to generate code that fits your system:
  - Diagrams: Use tools like Mermaid to create clear diagrams of your system's architecture, data flow, and service interactions.
  - Design docs & ADRs: Maintain concise Architecture Decision Records and design documents that explain the why behind your technical choices.
  - Project structure files: A well-defined README.md and clear folder structure act as a map for both humans and AI agents.
Part 2: Craft actionable prompts (The micro context)
Once the AI has the high-level map, you need to give it clear, street-level directions for the specific task at hand. This is where prompt engineering comes in.
A useless prompt (Vague directions):
“Write a function that validates user input.”
An actionable prompt (Specific directions):
“I'm using the Zod validation library. My existing error handling pattern is in src/errors/ApiError.ts. Write a validation function for a new user signup form. The user schema requires:
- email: must be a valid email.
- password: must be a string, min 10 chars, 1 uppercase, 1 number, 1 special character.
- age: must be an integer >= 18.
Throw ApiError for validation failures.”
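For illustration, here's roughly the kind of code the second prompt could produce. This is a sketch under assumptions: the ApiError import path and constructor signature are hypothetical stand-ins for whatever actually lives in src/errors/ApiError.ts.

import { z } from 'zod';
import { ApiError } from './errors/ApiError'; // hypothetical path; match your project

// Schema mirroring the requirements spelled out in the prompt
const signupSchema = z.object({
  email: z.string().email(),
  password: z
    .string()
    .min(10)
    .regex(/[A-Z]/, 'password needs an uppercase letter')
    .regex(/[0-9]/, 'password needs a number')
    .regex(/[^A-Za-z0-9]/, 'password needs a special character'),
  age: z.number().int().gte(18),
});

export function validateSignup(input) {
  const result = signupSchema.safeParse(input);
  if (!result.success) {
    // Assumed ApiError(message) constructor; adapt to the real class
    throw new ApiError(result.error.issues.map((i) => i.message).join('; '));
  }
  return result.data;
}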
The first prompt gives you generic code. The second prompt gives you code that fits your architecture. That's the right kind of documentation.
Your prompt is your new design doc. Make it count.
3. Keep it simple
“Keep it simple” has always been good advice. Now, it's a technical prerequisite. AI tools are powerful, but they are easily confused by complexity. They struggle to analyze, refactor, or debug code with high cognitive complexity, because sprawling context and convoluted control flow degrade a model's ability to reason effectively. Enforcing simplicity is no longer just a courtesy to your human colleagues; it's what lets your AI tools work effectively at all.
Example: Enforce simplicity with concrete guardrails
Don't just talk about simplicity; enforce it in your quality gates.
- Function length: Keep functions under 50-100 lines.
- Cognitive complexity: Keep it below 15.
- Code duplication: Aim for 0%.
- Nesting depth: Keep it under 4 levels.
This isn't just a style guide. It's a set of technical specs to make your codebase “AI-ready.”
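These thresholds don't have to live only on a wiki. As a minimal sketch, assuming ESLint with eslint-plugin-sonarjs installed, most of them can be encoded directly in your lint config:

// .eslintrc.cjs — guardrail thresholds mirroring the list above
module.exports = {
  plugins: ['sonarjs'],
  rules: {
    'max-lines-per-function': ['error', { max: 100 }], // function length
    'sonarjs/cognitive-complexity': ['error', 15], // cognitive complexity
    'sonarjs/no-identical-functions': 'error', // a local proxy for duplication
    'max-depth': ['error', 4], // nesting depth
  },
};

Cross-file duplication is better measured by a dedicated analyzer, which is exactly where a quality gate comes in.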
4. Absolutely, positively no stray code
AI assistants are eager to please, which often makes them verbose. They generate boilerplate, redundant comments, and “just-in-case” logic that adds clutter and cognitive overhead. Your job is to be the ruthless editor who prunes this back to its essential core.
Example: The git diff is your best friend
Here's what an AI, aiming to be overly explicit, might generate to check user permissions. Notice the redundant isAdmin variable.
// Function to check if a user has admin privileges
function checkAdmin(user) {
  // First, check if the user object exists
  if (user) {
    // Next, check if the user has a roles array
    const roles = user.roles;
    if (roles && Array.isArray(roles)) {
      // Check if the 'admin' role is included in the array
      const isAdmin = roles.includes('admin');
      if (isAdmin) {
        return true; // The user is an admin
      }
    }
  }
  // If any check fails, the user is not an admin
  return false;
}
This code works by performing a series of nested checks: it first verifies that user isn't null, then checks that user.roles exists and is an array, and finally checks whether that array includes the string 'admin'. It's defensive and safe, and the isAdmin variable isn't a bug, but its usage is redundant: the code declares the constant and then immediately tests it in an if statement just to return true, when if (roles.includes('admin')) { return true; } would do the same job. That redundancy adds to the cognitive load of the function.
The diff view
- // Function to check if a user has admin privileges
- function checkAdmin(user) {
-   // First, check if the user object exists
-   if (user) {
-     // Next, check if the user has a roles array
-     const roles = user.roles;
-     if (roles && Array.isArray(roles)) {
-       // Check if the 'admin' role is included in the array
-       const isAdmin = roles.includes('admin');
-       if (isAdmin) {
-         return true; // The user is an admin
-       }
-     }
-   }
-   // If any check fails, the user is not an admin
-   return false;
- }
+ // Check if user is an admin
+ function checkAdmin(user) {
+   return !!user?.roles?.includes('admin');
+ }
The habit isn't just avoiding stray code; it's the continuous act of refactoring verbose AI output into clean, idiomatic code.
5. Analyze everything
“Analyze everything” sounds exhausting. But it doesn't mean you should analyze everything manually. It means you must accept that AI-driven development speed requires AI-driven analysis speed. You cannot possibly keep up otherwise. The only way to do this is with aggressive automation.
Example: A modern, automated analysis workflow
- In the IDE: As AI generates code, the SonarQube IDE extension provides immediate feedback on bugs, vulnerabilities, security hotspots, and code smells.
- On commit: A pre-commit hook runs linters and formatters, ensuring no noise ever enters the repository (a minimal sketch follows this list).
- On Pull Request: A SonarQube quality gate performs a deep analysis. It is the ultimate authority that blocks merges that don't meet the project's standards for security, reliability, and maintainability.
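For the on-commit step, here's a minimal sketch assuming husky and lint-staged are installed; the globs and commands are illustrative, not prescriptive:

// .lintstagedrc.cjs — invoked by a husky pre-commit hook running `npx lint-staged`
module.exports = {
  '*.{js,ts}': ['eslint --fix', 'prettier --write'],
};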
This automated pipeline is what “analyzing everything” actually looks like in practice.
6. Mandatory unit tests
Unit tests have always been mandatory for professional teams. What's new is that the chore of writing them can now be largely delegated. AI is fantastic at generating boilerplate tests. The habit is no longer just “write tests,” but “use AI to generate tests, then apply your expertise to perfect them.”
Example: Augmenting the AI's “happy path” test
You ask an AI to test a sum(numbers) function. It will generate the obvious case.
AI-generated test:
test('sums up an array of numbers', () => {
  expect(sum([1, 2, 3])).toBe(6);
});
This is a good start. Your job is to add the crucial edge cases the AI didn't consider.
Human-augmented test suite:
// Human-added tests for edge cases
test('returns 0 for an empty array', () => {
  expect(sum([])).toBe(0);
});

test('ignores null or undefined values', () => {
  expect(sum([1, null, 2, undefined, 3])).toBe(6);
});

test('correctly sums negative numbers', () => {
  expect(sum([-1, -2, 3])).toBe(0);
});
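That second test earns its keep: a naive reduce over the array yields NaN as soon as undefined shows up, so the test forces a deliberate decision. Here's a sketch of an implementation that satisfies all four tests; treating null and undefined as 0 is an assumption about the spec, not the only defensible choice:

// Sums an array, treating null/undefined entries as 0 so they are ignored
function sum(numbers) {
  return numbers.reduce((total, n) => total + (n ?? 0), 0);
}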
7. Rigorous code reviews
Code reviews can be slow when focused on minor issues. But when you automate the small stuff and add a new layer of AI-specific assurance, the “rigor” in your reviews can finally shift to what humans do best: strategic thinking.
This creates an evolved review process where the labor is divided intelligently.
What's handled by automated analysis & AI Assurance:
Your first line of defense is a system that automatically vets every line of code. This includes:
- Standard code quality: Catches maintainability issues like high complexity, duplication, and code smells.
- Sonar's AI Code Assurance: This crucial new layer is specifically trained to find the unique, subtle flaws that AI models often introduce, such as:
  - Tainted data flows that create security vulnerabilities.
  - Hidden performance bottlenecks (like sequential I/O in a loop).
  - Subtle logical flaws that a basic linter would miss.
What requires human expertise:
With the confidence that the code is functionally sound and secure, the human reviewer can focus exclusively on the high-level questions that require true understanding:
- Does this code fulfill the business requirement?
- Is this the right long-term architectural approach?
- Did the AI misunderstand the core intent of the ticket?
A rigorous review is no longer about finding syntax errors; it’s about confirming the code is strategically sound, knowing a whole class of AI-introduced risks has already been eliminated.
Tools for good habits
Check out these interactive demos that show how SonarQube helps put principles into practice, integrating seamlessly into existing SDLC workflows so that habits become second nature.