I've spent 9 months figuring out what actually works with AI coding tools, especially on messy, real-world codebases. The breakthrough? Stop letting AI write code until you've reviewed a written plan. Here's the flow I keep seeing in my research, and Boris has done a great job collecting the whole thing in his blog:

1. Research Phase: Don't accept verbal summaries. Force deep reads into persistent files: "Read auth/middleware deeply. Write findings in research.md with intricacies." Written artifacts = review surfaces. They catch misunderstandings before they become broken implementations.

2. Planning Phase: Request detailed plans in plan.md, with code snippets, file paths, and trade-offs. Not the built-in plan mode (very important!). Custom markdown files you control.

3. Annotation Cycle (the critical part): Review the plan in your editor. Add inline notes directly:
- "This breaks OAuth flow"
- "Use existing UserService instead"
- "Security: validate input here"
Send the annotated plan back. Repeat 1-6 times until it's right. This is where the main thinking happens!

4. Then, and only then, implement.

This prevents the most expensive failure: code that works in isolation but breaks everything around it.

Pro tip: For standard features, provide reference implementations from open source. Claude with a concrete example >>> Claude designing from scratch.

The workflow feels slower at first. But catching architectural mistakes in a 50-line plan.md beats debugging a 500-line implementation that went wrong from line 1.

This process is now called RPI (Research, Plan, Implement). Have you tried this in your workflows yet? https://lnkd.in/dMP7dCgc
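The annotation cycle is easier to picture with a concrete artifact. Here is a hypothetical slice of a plan.md after one review pass; the step names, file paths, and `UserService`/OAuth details are illustrative, not from a real project:

```markdown
## Step 2: Token refresh

Modify `auth/middleware.ts` to re-issue tokens on every request.

> REVIEW: This breaks the OAuth flow. Refresh only on expiry.

## Step 3: User lookup

Add a new `fetchUser()` helper in `auth/util.ts`.

> REVIEW: Use the existing UserService instead of a new helper.
> REVIEW: Security: validate the user ID before the lookup.
```

Each `> REVIEW:` note goes back to the agent verbatim, so the plan, not the code, absorbs the iteration.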
How to Implement Code Self-Review Processes
Explore top LinkedIn content from expert professionals.
Summary
Code self-review is a process where programmers thoroughly check their own code before sharing it with teammates for feedback, helping to catch mistakes and improve clarity early on. Implementing this practice can streamline collaborative reviews and reduce errors, especially when using AI-generated code or working within complex projects.
- Document your plan: Always write out your intended changes and their impact in a dedicated file before you start coding, so you have a clear roadmap to guide your implementation.
- Review line by line: Go through your code as if you’re a team reviewer, making notes on anything unclear or inconsistent, and address issues before submitting for wider review.
- Request targeted feedback: When you share your code, ask colleagues to focus on specific areas like architecture or edge cases, so your review process is more focused and productive.
In the last 11 years of my career, I've participated in code reviews almost daily. I've sat through hundreds of review sessions with seniors and colleagues. Here's how to make your code reviews smoother, faster, and easier:

1. Start with Small, Clear Commits - Break your changes into logical, manageable chunks. This makes it easier for reviewers to focus and catch errors quickly.
2. Write Detailed PR Descriptions - Always explain the "why" behind the changes. This provides context and helps reviewers understand your thought process.
3. Self-Review Before Submitting - Take the time to review your own code before submitting. You'll catch a lot of your own mistakes and improve your review quality.
4. Ask for Specific Feedback - Don't just ask for a "review"; be specific. Ask for feedback on logic, structure, or potential edge cases.
5. Don't Take Feedback Personally - Code reviews are about improving the code, not critiquing the coder. Be open to constructive criticism and use it to grow.
6. Prioritize Readability Over Cleverness - Write code that's easy to understand, even if it's less "fancy." Simple, clear code is easier to maintain and review.
7. Focus on the Big Picture - While reviewing, look at how changes fit into the overall system, not just the lines of code. Think about long-term maintainability.
8. Encourage Dialogue - Reviews shouldn't be a one-way street. Engage in discussions and collaborate with reviewers to find the best solution.
9. Be Explicit About Non-Blocking Comments - Mark minor suggestions as "nitpicks" to avoid confusion. This ensures critical issues get addressed first.
10. Balance Praise and Criticism - Acknowledge well-written code while offering suggestions for improvement. Positive feedback encourages better work.
11. Always Follow Up - If you request changes or leave feedback, follow up to make sure the feedback is understood and implemented properly. It shows you're invested in the process.

P.S.: What would you add from your experience?
-
90% of my code is AI-generated. But every PR has my name on it.

I learned this the hard way. Early on, I'd generate code with Claude, open a PR, and let my team review it. Fast, right? Except my PRs kept coming back with comments. Inconsistencies. Naming that didn't match the codebase. Edge cases the AI missed that I should have caught.

I was treating AI-generated code like it was mine. But I didn't write it. I should have been treating it like a PR from an external contributor.

Now my workflow is different. I open every PR as a draft first. I review it myself, line by line, the same way I'd review a teammate's code. Only after it passes my own review do I open it for the team.

The shift in thinking: I'm not the author anymore. I'm the first reviewer.

The difference was immediate. Fewer comment rounds. Faster approvals. My peers spend their review time on architecture and logic, not catching inconsistencies I should have caught myself.

Your name is on the PR. The AI didn't open it. You did. Own it before you share it.

Do you self-review your AI-generated PRs? #AITools #CodeReview #SoftwareEngineering
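The draft-first flow maps cleanly onto the GitHub CLI. Here is a minimal sketch assuming `gh` is installed; the branch name and PR title are made up, and each command goes through a `run` helper that only echoes it, so the sketch is safe to execute as-is (drop the `echo` to run it for real):

```shell
#!/bin/sh
# Sketch of the "first reviewer" flow using the GitHub CLI (gh).
# run() only echoes each step so this file executes anywhere;
# remove the echo to perform the real commands.
run() { echo "+ $*"; }

run git push -u origin ai/rate-limit                  # the AI-assisted branch (name illustrative)
run gh pr create --draft --title "Add rate limiting"  # draft: the team is not pinged yet
run gh pr diff                                        # read your own diff line by line
run gh pr ready                                       # only after your own pass, open it to the team
```

`gh pr create --draft`, `gh pr diff`, and `gh pr ready` are real GitHub CLI commands; the discipline is in the ordering, with the self-review happening before anyone else sees the PR.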
-
Some of y'all asked for an update on the pull request-based workflow that I've been using with Cursor, Claude Code, Codex, and Copilot. (Yes, I'm using all four.)

I start out in Linear. I might switch to GitHub Issues, and I'm happy to talk about why if anyone is interested. The primary goal here is to plan. I'll write out a high-level set of requirements and acceptance criteria, as well as my guidance on how we should approach implementing this. I use Codex CLI (gpt-5.1-codex-max) to help me find all of the relevant utilities and functions that might already exist in the codebase, in an attempt to limit agents' tendency to reinvent functions that already exist. I'll take several turns at this until I'm satisfied with the approach. I do this in the morning when my brain is still fresh and my attention to detail is strong.

For the Important Stuff™: I will check out a branch locally and usually begin the work by hand. I'm validating my assumptions at this point. I've found that if I can sketch out the rough outline, then an LLM is pretty good at copying my patterns and filling in all of the tedious details. If it's not important or complicated (e.g. "fill in this missing test coverage" or "add this lint rule and then get everything passing"), I'll hand it off to either a Cursor Background Agent or Claude Code for the Web.

Next, it's time to open a pull request. I'll go through it line by line. This feels much more civilized than watching my terminal like a hawk. I'll also have three code review agents do their reviews: Cursor Bugbot, Codex, and Copilot. My comments are typically architectural, or calling out places where it duplicated existing functionality or cut corners. The AI agents are shockingly good at finding stuff like memory leaks. Between the agents and myself, there might be 10–20 comments. I'll review all of them and dismiss the ones that I don't think are important, or add context.

If I mention the issue in the title of the pull request, Linear will add the ticket description, which provides important context about what we were trying to do. Cursor and GitHub Copilot will also summarize the changes in the pull request. Linear automatically moves the ticket from in-progress to in-review to done.

As of this writing, none of the tools are all that good at working on an existing branch. Cursor has "Fix in Cursor" and "Fix in Web," but it only adds those buttons to the comments it leaves, and it's still one comment at a time. So I pull down the branch and have Claude Code use the GitHub MCP server to triage all of the review comments and apply the changes. This triggers another set of reviews, and I continue this loop until I feel good about the pull request.

The tedious part has been juggling all of the branches. I've been working on a tool to help manage all of this that I'm hoping I can show off to all y'all next week.

TL;DR: Overall, it's been a success, but the entire process needs better tooling.
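The triage loop at the end can be sketched the same way. Everything here is hypothetical glue: the PR number, OWNER/REPO, and the prompt text are placeholders, and each command is echoed through `run` so the sketch runs without `gh` or `claude` installed (`claude -p` is Claude Code's non-interactive print mode):

```shell
#!/bin/sh
# Sketch of the review-comment triage loop (all names are placeholders).
# run() only echoes each step; remove the echo to run the real commands.
run() { echo "+ $*"; }

run gh pr checkout 123                          # pull down the branch under review
run gh api repos/OWNER/REPO/pulls/123/comments  # fetch reviewer + agent comments
run claude -p "Triage the review comments on PR 123 and apply the ones that matter."
run git push                                    # pushing re-triggers the agent reviews
```

The loop then repeats: new agent reviews come in, get triaged, and the cycle continues until the PR feels right.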