Best Programming Practices for Clean Code

Explore top LinkedIn content from expert professionals.

  • View profile for Andrew Ng

    DeepLearning.AI, AI Fund and AI Aspire

    2,463,255 followers

    Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning, and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output. Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains.

    You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

    Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

    "Here's code intended for task X: [previously generated code]. Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it."

    Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions.
    And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

    Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

    Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications' results. If you're interested in learning more about Reflection, I recommend:
    - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
    - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
    - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

    [Original text: https://lnkd.in/g4bTuWtU ]
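As an illustration, the generate/critique/rewrite loop described above can be sketched in a few lines of Python. The `llm` function below is a hypothetical stand-in stub for whatever chat-completion API you use; only the control flow is shown.

```python
# Minimal sketch of a Reflection loop. `llm` is a placeholder for any
# chat-completion call (OpenAI, Anthropic, etc.); it is stubbed out here
# so the control flow runs on its own.
def llm(prompt: str) -> str:
    return f"<model output for: {prompt[:40]}...>"  # stub

def reflect_and_rewrite(task: str, rounds: int = 2) -> str:
    # 1. Generate an initial draft directly.
    draft = llm(f"Write code for the following task:\n{task}")
    for _ in range(rounds):
        # 2. Ask the model to critique its own output.
        critique = llm(
            f"Here's code intended for this task: {task}\n\n{draft}\n\n"
            "Check the code carefully for correctness, style, and "
            "efficiency, and give constructive criticism for how to improve it."
        )
        # 3. Ask it to rewrite the draft using that feedback.
        draft = llm(
            f"Task: {task}\n\nPrevious code:\n{draft}\n\n"
            f"Feedback:\n{critique}\n\nRewrite the code using this feedback."
        )
    return draft
```

In a real system each `llm` call would hit an API; the tool-use variant the post mentions would run unit tests between steps 1 and 2 and feed failures into the critique prompt.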

  • View profile for Milan Jovanović

    Practical .NET and Software Architecture Tips | Microsoft MVP

    276,084 followers

    I've been using Clean Architecture for 6+ years. Here's why I think it's amazing. 👇

    The biggest pain in enterprise systems? A lack of structure. Every project reinvents the wheel. Every team builds layers differently. And knowledge doesn't transfer between systems. But there's a proven way to fix this. It's called Clean Architecture.

    It's not about how many projects you create. It's not about fancy patterns. ✅ It's about the direction of dependencies. Inner layers (domain, app) define abstractions. Outer layers (infra, presentation) implement those abstractions. Never the other way around. That's it. That's the rule.

    You can package this as:
    - Layers (domain, app, infra, web)
    - Vertical slices (grouped per feature)
    - Components (layers + vertical slices)
    They all work, if you follow the rule.

    What are the benefits?
    - Modular code
    - Clear separation of concerns
    - Easy-to-test business logic
    - Faster onboarding
    - Loosely coupled components

    Clean Architecture has helped me ship excellent products. And I'll keep using it because it works. Want to simplify your development process? Grab my free Clean Architecture template here: https://lnkd.in/eDgfyWKB
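The dependency rule above can be sketched in a few lines (shown in Python for brevity; the names are illustrative, not taken from the author's template). The inner layer defines the abstraction and the outer layer implements it, never the reverse.

```python
# Sketch of the dependency rule: the domain layer defines the port it
# needs, the infrastructure layer implements it, and the outer object is
# injected into the inner one at the composition root.
from abc import ABC, abstractmethod

# --- inner (domain/app) layer: defines the abstraction ---
class OrderRepository(ABC):
    @abstractmethod
    def save(self, order_id: str) -> None: ...

class PlaceOrder:
    def __init__(self, repo: OrderRepository) -> None:
        self.repo = repo  # depends only on the abstraction

    def execute(self, order_id: str) -> str:
        self.repo.save(order_id)
        return f"order {order_id} placed"

# --- outer (infra) layer: implements the domain's abstraction ---
class InMemoryOrderRepository(OrderRepository):
    def __init__(self) -> None:
        self.saved: list[str] = []

    def save(self, order_id: str) -> None:
        self.saved.append(order_id)
```

Swapping `InMemoryOrderRepository` for a database-backed implementation touches nothing in the inner layer, which is what makes the business logic easy to test.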

  • View profile for Cole Medin

    Technology Leader and Entrepreneur | AI Educator & Content Creator | Founder of Dynamous AI

    8,350 followers

    Karpathy's LLM Knowledge Bases post went viral this week, and rightfully so. The idea is simple: raw documents go in, an LLM processes them into a structured wiki, your agent queries that wiki at runtime. No fancy RAG pipeline, no vector database. Just compiled knowledge your agent can navigate.

    Everyone is applying this to external data: docs, papers, research articles. I went a different direction. The raw material in my version isn't articles from the web. It's Claude Code session logs. Every time I work on the codebase, hooks automatically capture what got built, what decisions were made, what didn't work and why. A daily flush script compiles those logs into wiki articles in my Obsidian vault. When I start a new session, the agent searches that wiki before writing a single line of code.

    The result feels different from a good CLAUDE.md. It's not just static documentation! It's a living record of every architectural decision, every "we tried X and it broke because Y." Institutional memory, but searchable.

    The loop compounds quickly. Ask a question, the agent finds a relevant wiki article from three weeks ago, gives a better answer, and that answer eventually feeds back into the wiki. The longer you use it, the more context the agent has about your codebase specifically (not codebases in general, yours).

    Setup is one prompt into Claude Code: hooks, daily flush script, wiki structure, all generated automatically. Karpathy's insight was "stop RAG-ing raw documents, start compiling them." Most developers are losing context between every session. All that institutional knowledge evaporates. Compiling your session logs applies the same idea one level closer to home.

    I just posted a full breakdown on YouTube with the complete architecture walkthrough and a live demo of setting up the whole system. Link to my GitHub repo in the replies too!
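A "daily flush" script in this spirit might look roughly like the sketch below. The JSONL layout, field names, and directory structure are assumptions for illustration, not the author's actual setup.

```python
# Illustrative "daily flush": compile JSONL session logs into markdown
# wiki articles. Each log line is assumed to record one decision with a
# rationale; both field names are hypothetical.
import json
from pathlib import Path

def flush_logs(log_dir: Path, wiki_dir: Path) -> int:
    """Turn each *.jsonl session log into one wiki article; return count."""
    wiki_dir.mkdir(parents=True, exist_ok=True)
    written = 0
    for log_file in sorted(log_dir.glob("*.jsonl")):
        lines = [ln for ln in log_file.read_text().splitlines() if ln.strip()]
        entries = [json.loads(ln) for ln in lines]
        if not entries:
            continue
        body = "\n".join(
            f"- **{e['decision']}**: {e['rationale']}" for e in entries
        )
        article = wiki_dir / f"{log_file.stem}.md"
        article.write_text(f"# Session {log_file.stem}\n\n{body}\n")
        written += 1
    return written
```

A real version would use an LLM to summarize and cross-link the entries rather than a mechanical template; the point is only that compilation happens offline, before any agent queries the wiki.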

  • View profile for Julio Casal

    .NET • Azure • Agentic AI • Platform Engineering • DevOps • Ex-Microsoft

    66,143 followers

    Most ASP.NET Core developers get middleware order wrong. And it breaks their apps in ways that are incredibly hard to debug.

    Authentication runs before routing? Your auth checks never fire. CORS after authorization? Your frontend gets mysterious 403s. Exception handler at the bottom? You lose error visibility entirely. Here's the thing: middleware order isn't a suggestion. It's a contract.

    𝗧𝗵𝗲 𝗢𝗳𝗳𝗶𝗰𝗶𝗮𝗹 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲 (𝟭𝟲 𝗦𝘁𝗲𝗽𝘀)
    The image below shows every built-in middleware in the exact order Microsoft documents them. But here's what the docs don't emphasize enough:

    𝗪𝗵𝘆 𝗘𝘅𝗰𝗲𝗽𝘁𝗶𝗼𝗻 𝗛𝗮𝗻𝗱𝗹𝗲𝗿 𝗶𝘀 #𝟭
    It wraps everything. If any middleware below it throws, you get a clean error response instead of a raw 500. Move it lower and you lose visibility into failures above it.

    𝗪𝗵𝘆 𝗦𝘁𝗮𝘁𝗶𝗰 𝗙𝗶𝗹𝗲𝘀 𝗰𝗼𝗺𝗲𝘀 𝗯𝗲𝗳𝗼𝗿𝗲 𝗥𝗼𝘂𝘁𝗶𝗻𝗴
    It short-circuits the pipeline. If the request matches a file in wwwroot, the remaining 12 middleware never execute. That's a massive performance win for every CSS, JS, and image request.

    𝗪𝗵𝘆 𝗔𝘂𝘁𝗵𝗲𝗻𝘁𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗺𝘂𝘀𝘁 𝗰𝗼𝗺𝗲 𝗯𝗲𝗳𝗼𝗿𝗲 𝗔𝘂𝘁𝗵𝗼𝗿𝗶𝘇𝗮𝘁𝗶𝗼𝗻
    You can't check permissions if you don't know who the user is yet. Sounds obvious, but I've seen this reversed in production codebases more than once.

    𝗪𝗵𝘆 𝗖𝗢𝗥𝗦 𝘀𝗶𝘁𝘀 𝗮𝗳𝘁𝗲𝗿 𝗥𝗼𝘂𝘁𝗶𝗻𝗴
    CORS needs to know which endpoint was matched to apply the right policy. Put it before routing and it can't resolve endpoint-specific CORS attributes.

    𝗧𝗵𝗲 𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗲 𝗧𝗿𝗮𝘃𝗲𝗹𝘀 𝗕𝗮𝗰𝗸 𝗨𝗽
    This is the part most people miss. The response passes through every middleware in reverse order. That's why Response Compression sits near the bottom: it compresses on the way back up, after the endpoint has written the response body.

    𝗧𝗵𝗲 𝗥𝘂𝗹𝗲
    If you're unsure where your custom middleware goes, ask yourself: "Does it need to know the user?" If yes, it goes after Authentication. "Does it need to know the endpoint?" If yes, it goes after Routing.

    Save this image. Pin it. You'll need it.

    P.S. If you want to master the full ASP.NET Core stack (not just middleware), grab my free .NET Developer Roadmap. It covers 110+ topics every backend dev should know. 🗺️ https://lnkd.in/gmb6rQUR
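The wrapping behavior described above ("the response travels back up") can be demonstrated with a toy pipeline. This sketch is in Python rather than ASP.NET Core, and the middleware names are illustrative; it shows requests descending in registration order and responses bubbling back in reverse, plus a short-circuit.

```python
# Toy middleware pipeline: each middleware wraps the next, so requests
# descend in registration order and responses travel back up in reverse.
def build_pipeline(middlewares, endpoint):
    handler = endpoint
    for mw in reversed(middlewares):  # wrap inner-to-outer
        handler = mw(handler)
    return handler

def logging_mw(next_handler):
    def handle(request, trace):
        trace.append("log:in")
        response = next_handler(request, trace)
        trace.append("log:out")  # runs on the way back up
        return response
    return handle

def auth_mw(next_handler):
    def handle(request, trace):
        if not request.get("user"):
            trace.append("auth:reject")
            return 401  # short-circuit: the endpoint never runs
        trace.append("auth:ok")
        return next_handler(request, trace)
    return handle

def endpoint(request, trace):
    trace.append("endpoint")
    return 200
```

Register `auth_mw` after a routing step instead of before it and you get exactly the class of bug the post describes: the check fires before the information it needs exists.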

  • View profile for Sergei Grozov

    Senior Backend Engineer (.NET) | AI‑Augmented Development | Agentic Workflows | Scalable Systems | Vibe‑Driven Coding

    5,863 followers

    Don't Let DRY Make Your Code Too Thirsty

    The DRY (Don't Repeat Yourself) principle means that every piece of knowledge should only exist once in your codebase. Sounds great, right? But when does DRY become TOO DRY?

    Sometimes, in an effort to eliminate all repetition, we end up over-abstracting our code. This can lead to code that is hard to understand, maintain, or extend. For example, if you find yourself creating overly generic methods or classes that try to handle too many scenarios, you might be taking DRY too far. This makes the code confusing for others (or even your future self) and increases the chance of introducing bugs when changes are needed.

    DRY isn't always the best choice. In cases like DTOs or database schemas, repetition can be more readable and clear. Reusing too much can make your design rigid and harder to change when requirements evolve.

    Pros of DRY:
    • Reduces repetition, making your code easier to maintain.
    • Less copy-pasting means fewer chances for mistakes and errors.
    • Changes in logic require fewer edits, which reduces the risk of bugs.

    Cons of DRY:
    • Too much abstraction can make your code hard to understand.
    • Reusing too much logic across different parts can make changes risky and cause unexpected problems.

    Where have you found DRY to be more trouble than it's worth? How do you balance avoiding repetition without over-complicating your code?

    #DRY #SoftwareEngineering #ProgrammingPrinciples #CleanCode #CSharp
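A hypothetical illustration of the trade-off (in Python for brevity): one flag-driven "generic" formatter that merges unrelated knowledge behind a single entry point, versus three small functions that repeat a little structure but are each trivial to read, test, and change independently.

```python
# DRY taken too far: one function handling unrelated cases via a flag.
# Every new case grows the branch ladder, and changing one case risks
# the others.
def format_value(value, kind):
    if kind == "money":
        return f"${value:,.2f}"
    elif kind == "percent":
        return f"{value:.1%}"
    elif kind == "id":
        return f"#{value:06d}"
    raise ValueError(kind)

# The "wetter" alternative: slight repetition of structure, but each
# function has one reason to change.
def format_money(value):
    return f"${value:,.2f}"

def format_percent(value):
    return f"{value:.1%}"

def format_order_id(value):
    return f"#{value:06d}"
```

The first version only pays off if the cases genuinely share knowledge; when they merely look similar, the separate functions are the more maintainable choice.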

  • View profile for Brij kishore Pandey

    AI Architect & Engineer | AI Strategist

    719,437 followers

    When working with multiple LLM providers, managing prompts, and handling complex data flows, structure isn't a luxury, it's a necessity.

    A well-organized architecture enables:
    → Collaboration between ML engineers and developers
    → Rapid experimentation with reproducibility
    → Consistent error handling, rate limiting, and logging
    → Clear separation of configuration (YAML) and logic (code)

    𝗞𝗲𝘆 𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀 𝗧𝗵𝗮𝘁 𝗗𝗿𝗶𝘃𝗲 𝗦𝘂𝗰𝗰𝗲𝘀𝘀
    It's not just about folder layout, it's how components interact and scale together:
    → Centralized configuration using YAML files
    → A dedicated prompt engineering module with templates and few-shot examples
    → Properly sandboxed model clients with standardized interfaces
    → Utilities for caching, observability, and structured logging
    → Modular handlers for managing API calls and workflows

    This setup can save teams countless hours in debugging, onboarding, and scaling real-world GenAI systems, whether you're building RAG pipelines, fine-tuning models, or developing agent-based architectures.

    What's your go-to project structure when working with LLMs or Generative AI systems? Let's share ideas and learn from each other.
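The configuration/logic split might be sketched as follows. In practice the dict below would come from `yaml.safe_load` on a checked-in config file; it is inlined here so the sketch is self-contained, and all model names and parameters are illustrative.

```python
# Sketch of separating configuration (data) from logic (code). The dict
# stands in for a parsed YAML file such as config/models.yaml.
from dataclasses import dataclass

RAW_CONFIG = {  # stand-in for yaml.safe_load(open("config/models.yaml"))
    "default_model": "gpt-4o-mini",
    "models": {
        "gpt-4o-mini": {"temperature": 0.2, "max_tokens": 1024},
        "claude-haiku": {"temperature": 0.0, "max_tokens": 2048},
    },
}

@dataclass(frozen=True)
class ModelConfig:
    name: str
    temperature: float
    max_tokens: int

def load_model_config(raw: dict, name: str = "") -> ModelConfig:
    """Resolve a named model (or the default) into a typed config object."""
    name = name or raw["default_model"]
    params = raw["models"][name]
    return ModelConfig(name=name, **params)
```

Because the logic only ever sees a typed `ModelConfig`, swapping providers or tuning parameters is a YAML edit rather than a code change, which is the separation the post argues for.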

  • View profile for Jamil Farshchi

    Equifax CTO • UKG Board Member • FBI Strategic Advisor • LinkedIn Top Voice in Innovation and Technology

    44,723 followers

    AI didn't create a new problem. It put a price tag on an old one.

    Every company has a Dave. Nine years in. Knows where the bodies are buried. Knows which service breaks if you breathe on it wrong. Dave IS the documentation. But AI can't ask Dave.

    We've learned this lesson three times now. Security: you can't protect what you can't see. Cloud: you can't migrate what you don't understand. AI: you can't automate what you haven't documented. Same lesson. Same boring work nobody ever did. Three times.

    Every piece of knowledge in Dave's head instead of the repo is context your AI tools will never have. Without it, they guess. Confidently. At scale. And it's not just code. It's the helpdesk KB article nobody has touched since 2019. It's the IR runbook you promised yourself you'd write after the last 2am P1... but forgot to.

    The AI isn't failing. We're giving it garbage but expecting gold. Coding tools. Contact center. Workflows. Decision logic. Same pattern: undocumented, outdated, contradictory, tribal. It's like doing a new-hire eval with no training... and blaming the new hire for poor performance.

    The open-source community figured it out: 60,000+ projects ship standardized context files so every AI tool knows how to work in that codebase. No tool-selection pit fights. No governance pitfalls.

    Here's the thing: the documentation security teams requested, the architecture maps cloud needed, and the context AI requires? It's the same work. Not similar. The same. Close it and everything compounds. Security gets visibility. AI performs. New engineers ramp faster. And you stop being one Dave-retirement away from a knowledge crisis.

    Documentation debt is now performance debt. The teams pulling ahead right now aren't the ones with the best AI tools. They're the ones that finally wrote stuff down.

    #TheBoringWork #CTO #Cybersecurity #EngineeringLeadership

  • View profile for Allen Holub

    I help you build software better & build better software.

    33,548 followers

    Last night, I was chatting in the hotel bar with a bunch of conference speakers at Goto-CPH about how evil PR-driven code reviews are (we were all in agreement), and Martin Fowler brought up an interesting point: the best time to review your code is when you use it. That is, continuous review is better than what amounts to a waterfall review phase. For one thing, the reviewer has a vested interest in assuring that the code they're about to use is high quality. Furthermore, you are reviewing the code in a real-world context, not in isolation, so you are better able to see if the code is suitable for its intended purpose.

    Continuous review, of course, also leads to a culture of continuous refactoring. You review everything you look at, and when you find issues, you fix them.

    My experience is that PR-driven reviews rarely find real bugs. They don't improve quality in ways that matter. They DO create bottlenecks, dependencies, and context-swap overhead, however, and all of that pushes out delivery time and increases the cost of development with no balancing benefit.

    I will grant that two or more sets of eyes on the code leads to better code, but in my experience, the best time to do that is when the code is being written, not after the fact. Work in a pair, or better yet, a mob/ensemble. One of the teams at Hunter Industries, which mob/ensemble programs 100% of the time on 100% of the code, went a year and a half with no bugs reported against their code, with zero productivity hit. (Quite the contrary: they work very fast.) Bugs are so rare across all the teams, in fact, that they don't bother to track them. When a bug comes up, they fix it. Right then and there. If you're working in a regulatory environment, the Driver signs the code, and then any Navigator can sign off on the review, all as part of the commit/push process, so that's a non-issue.

    There's also a myth that it's best if the reviewer is not familiar with the code. I *really* don't buy that. An isolated reviewer doesn't understand the context. They don't know why design decisions were made. They have to waste a vast amount of time coming up to speed. They are also often not in a position to know whether the code will actually work. Consequently, they usually focus on trivia like formatting. That benefits nobody.

  • No, you won't be vibe coding your way to production. Not if you prioritise quality, safety, security, and long-term maintainability at scale.

    Coined recently by OpenAI co-founder Andrej Karpathy, "vibe coding" describes an AI-coding approach where developers focus on iterative prompt refinement to generate desired output, with minimal concern for the LLM-generated code implementation.

    At Canva, our assessment, based on extensive and ongoing evaluation of AI coding assistants, is that these tools must be carefully supervised by skilled engineers, particularly for production tasks. Engineers need to guide, assess, correct, and ultimately own the output as if they had written every line themselves. Our experimentation consistently reveals errors in tool-generated code ranging from superficial (style inconsistencies) to dangerous (incorrect, insecure, or non-performant code).

    Our engineering culture is built on code ownership and peer review. Rather than challenging these principles, our adoption of AI coding assistants has reinforced their importance. We've implemented a strict "human in the loop" approach that maintains rigorous peer review and meaningful code ownership of AI-generated code.

    Vibe coding presents significant risks for production engineering:
    - Short-term: introduction of defects and security vulnerabilities
    - Medium to long-term: compromised maintainability, increased technical debt, and reduced system understandability

    From a cultural perspective, vibe coding directly undermines peer review processes. Generating vast amounts of code from single prompts effectively DoS-attacks reviewers, overwhelming their capacity for meaningful assessment.

    Currently we see one narrow use case where vibe coding is exciting: spikes, proofs of concept, and prototypes. These are always throwaway code. LLM-assisted generation offers enormous value in rapidly testing and validating ideas with implementations we will ultimately discard.

    With rapidly expanding LLM capabilities and context windows, we continuously reassess our trust in LLM output. However, we maintain that skilled engineers play a critical role in guiding, assessing, and owning tool output as an immutable principle of sound software engineering.

  • View profile for Rahul Agarwal

    Staff ML Engineer | Meta, Roku, Walmart | 1:1 @ topmate.io/MLwhiz

    45,162 followers

    Few Lessons from Deploying and Using LLMs in Production

    Deploying LLMs can feel like hiring a hyperactive genius intern: they dazzle users while potentially draining your API budget. Here are some insights I've gathered:

    1. "Cheap" is a lie you tell yourself: Cloud costs per call may seem low, but the overall expense of an LLM-based system can skyrocket. Fixes:
    - Cache repetitive queries: users ask the same thing at least 100x/day.
    - Gatekeep: use cheap classifiers (e.g., BERT) to filter "easy" requests. Let LLMs handle only the complex 10% and your current systems handle the remaining 90%.
    - Quantize your models: shrink LLMs to run on cheaper hardware without massive accuracy drops.
    - Asynchronously build your caches: pre-generate common responses before they're requested, or gracefully fail the first time a query comes in and cache the answer for the next time.

    2. Guard against model hallucinations: Sometimes models express answers with such confidence that distinguishing fact from fiction becomes challenging, even for human reviewers. Fixes:
    - Use RAG: just a fancy way of saying you provide your model the knowledge it requires in the prompt itself, by querying some database based on semantic matches with the query.
    - Guardrails: validate outputs using regex or cross-encoders to establish a clear decision boundary between the query and the LLM's response.

    3. The best LLM is often a discriminative model: You don't always need a full LLM. Consider knowledge distillation: use a large LLM to label your data, then train a smaller, discriminative model that performs similarly at a much lower cost.

    4. It's not about the model, it's about the data on which it is trained: A smaller LLM might struggle with specialized domain data; that's normal. Fine-tune your model on your specific dataset, starting with parameter-efficient methods (like LoRA or adapters) and using synthetic data generation to bootstrap training.

    5. Prompts are the new features: Version them, run A/B tests, and continuously refine them using online experiments. Consider bandit algorithms to automatically promote the best-performing variants.

    What do you think? Have I missed anything? I'd love to hear your "I survived LLM prod" stories in the comments!
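The caching and gatekeeping fixes from point 1 can be sketched together; `cheap_classifier`, `expensive_llm`, and `simple_system` below are illustrative stand-ins, and the word-count heuristic is a toy placeholder for a real classifier.

```python
# Sketch of "cache repetitive queries" + "gatekeep with a cheap
# classifier": repeated queries are served from a cache, and only
# queries flagged as hard ever reach the expensive LLM.
from functools import lru_cache

def cheap_classifier(query: str) -> bool:
    """Stand-in for a small model (e.g. BERT) that flags 'hard' queries."""
    return len(query.split()) > 8  # toy heuristic

CALLS = {"llm": 0, "simple": 0}  # instrumentation for the example

def expensive_llm(query: str) -> str:
    CALLS["llm"] += 1
    return f"[LLM answer to: {query}]"

def simple_system(query: str) -> str:
    CALLS["simple"] += 1
    return f"[rule-based answer to: {query}]"

@lru_cache(maxsize=4096)          # repeated queries cost nothing
def answer(query: str) -> str:
    if cheap_classifier(query):   # only hard queries reach the LLM
        return expensive_llm(query)
    return simple_system(query)
```

In production the cache would typically be an external store (e.g. Redis) keyed on a normalized query, and the gate would be a trained classifier, but the routing shape is the same.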
