Improving Code Speed, Readability, and Memory Usage in Engineering

Explore top LinkedIn content from expert professionals.

Summary

Improving code speed, readability, and memory usage in engineering means writing software that runs faster, is easier to understand, and uses less computer memory. In practical terms, these improvements help developers maintain code more easily and allow programs to perform well, especially with large datasets or on resource-limited systems.

  • Keep it simple: Use clear logic and concise functions that handle one task each to make your code easier to read and maintain.
  • Use lazy evaluation: Return data only when it's needed with techniques like yield, so you avoid loading everything into memory at once and speed up processing.
  • Reuse resources: Employ methods like object pooling and stack allocation to minimize unnecessary memory allocations and reduce overhead.
Summarized by AI based on LinkedIn member posts
  • Dr Milan Milanović

    Chief Roadblock Remover and Learning Enabler | Helping 400K+ engineers and leaders grow through better software, teams & careers | Author of Laws of Software Engineering | Leadership & Career Coach

    272,802 followers

    Top 10 NASA Rules For Better Coding

    The Power of 10 rules were formulated in 2006 by Gerard J. Holzmann at NASA's JPL Laboratory for Reliable Software, aiming to eliminate C coding practices that make code hard to review or statically analyze. They are part of a broader set of JPL coding standards. The rules are:

    1. Avoid complex flow: Steer clear of tricky control structures; stick to simple loops and conditionals.
    2. Bound loops: Give every loop a fixed upper bound so it provably cannot run forever.
    3. Avoid heap allocation: Favor stack or static memory allocation to avoid memory leaks and fragmentation.
    4. Use short functions: Keep functions concise, each handling a single task. This aligns well with Clean Code's Single Responsibility Principle.
    5. Runtime assertions: Use assertions to catch unexpected conditions.
    6. Limited data scope: Declare variables in the smallest scope possible (e.g., private or protected in C#) to maintain clarity.
    7. Check return values: Always check the return values of functions and handle any errors.
    8. Sparse preprocessor use: Minimize preprocessor directives for readability.
    9. Limit pointer use: Simplify pointer use and avoid function pointers for clearer code.
    10. Compile with all warnings enabled: Address every compiler warning to catch potential issues early. This is often neglected in many projects!

    #technology #softwareengineering #programming #techworldwithmilan #developers
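The rules were written for C, but several carry over to any language. Below is a minimal C# sketch of my own (not from the post) illustrating rules 2, 4, 5, and 7: a short, single-purpose function with a hard loop bound, a guard against unexpected input, and a checked return value.

```csharp
using System;

static class SafeParsing
{
    const int MaxItems = 1_000; // rule 2: a hard upper bound on loop iterations

    // Rule 4: one short function, one task — sum the tokens that parse as ints.
    public static int SumValid(string[] tokens)
    {
        if (tokens == null)
            throw new ArgumentNullException(nameof(tokens)); // rule 5: catch unexpected conditions

        int sum = 0;
        int limit = Math.Min(tokens.Length, MaxItems); // rule 2: the loop cannot exceed MaxItems
        for (int i = 0; i < limit; i++)
        {
            if (int.TryParse(tokens[i], out int value)) // rule 7: never ignore the return value
                sum += value;
        }
        return sum;
    }
}
```

For example, `SafeParsing.SumValid(new[] { "1", "x", "2" })` skips the unparsable token and returns 3.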

  • Alexandre Germano Souza de Andrade

    Senior Software Engineer | Backend-Focused Fullstack Developer | .NET | C# | Angular | TypeScript | JavaScript | Azure | SQL Server

    10,611 followers

    💡 Using yield for Better Performance and Simpler Iterators

    Manually implementing an enumerator can be tedious, requiring a lot of boilerplate code. Thankfully, yield simplifies the creation of custom iterators by letting you return elements lazily without managing state explicitly. Instead of building complex IEnumerator implementations, you can use yield return to produce values on demand, improving readability, performance, and memory efficiency.

    🔍 Why use yield?
    • Lazily iterate over large datasets without loading everything into memory.
    • Simplify custom iterators by removing manual state management.
    • Improve performance by generating values only when needed.

    📌 Common use cases:
    • Large file processing: Instead of loading an entire file into memory, yield allows reading and processing lines or blocks of data individually. This is crucial for files that exceed available memory.
    • Database queries: For large result sets, yield enables returning results in a paginated manner, preventing memory overload and improving performance by processing results on demand.

    💡 Conclusion: Using yield return prevents excessive memory usage and optimizes performance when dealing with large collections or external data sources like files, databases, or APIs. 🚀

    #DotNet #CSharp #Performance #BestPractices #CodingTips #SoftwareDevelopment #DevCommunity #CodeOptimization #CleanCode #Programming
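A small sketch of the idea (the names `LazySequences`, `Squares`, and the `Produced` counter are mine, added to make the laziness observable): the compiler turns a `yield return` method into an IEnumerator state machine, so values are generated only as the caller asks for them.

```csharp
using System.Collections.Generic;
using System.Linq;

static class LazySequences
{
    public static int Produced; // counts how many values the iterator actually generated

    // yield return makes this a lazy iterator: no manual IEnumerator bookkeeping.
    public static IEnumerable<int> Squares(int count)
    {
        for (int i = 1; i <= count; i++)
        {
            Produced++;          // side effect lets us observe the laziness
            yield return i * i;  // each value is produced only when requested
        }
    }
}
```

Enumerating `LazySequences.Squares(1_000_000).Take(3).ToList()` touches only the first three values, so `Produced` ends up at 3, not one million — nothing is computed or stored beyond what the caller consumes.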

  • Pooya Eimandar

    Stay Hungry Stay Foolish

    5,753 followers

    A paper released last year by Bilokon and one of his PhD students, Burak Gunduz, looks at 12 techniques for reducing latency in C++ code:

    🚀 Lock-free programming: Multi-threaded algorithms that, unlike their traditional counterparts, do not rely on mutual exclusion mechanisms such as locks to arbitrate access to shared resources.
    🚀 SIMD instructions: Instructions that exploit the parallel processing power of contemporary CPUs, executing the same operation on multiple data elements simultaneously.
    🚀 Mixing data types: When a computation involves both float and double types, implicit conversions are required. Sticking to a single floating-point type avoids that conversion cost.
    🚀 Signed vs unsigned: Ensuring consistent signedness in comparisons to avoid implicit conversions.
    🚀 Prefetching: Explicitly loading data into cache before it is needed to reduce fetch delays, particularly in memory-bound applications.
    🚀 Branch reduction: Minimizing conditional branches so the CPU's branch predictor is wrong less often and speculative execution is wasted less often.
    🚀 Slowpath removal: Keeping rarely executed code paths out of the hot path.
    🚀 Short-circuiting: Logical expressions cease evaluation as soon as the final result is determined.
    🚀 Inlining: Incorporating the body of a function at each call site, removing call overhead and enabling further compiler optimization.
    🚀 Constexpr: Computations marked constexpr are evaluated at compile time, enabling constant folding and eliminating runtime work.
    🚀 Compile-time dispatch: Techniques like template specialization or function overloading that select optimized code paths at compile time based on type or value, avoiding runtime dispatch.
    🚀 Cache warming: Preloading data into the CPU cache before it's needed, to minimize memory access time and boost responsiveness.

    Reference: https://lnkd.in/dDfYJyw6 #technology #tech #cpp #programming
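The paper targets C++, but some of these ideas translate to other languages. As one illustration — my own sketch, not from the paper — here is SIMD summation in C# using `System.Numerics.Vector<T>`, which adds several array elements per instruction on hardware that supports it:

```csharp
using System.Numerics;

static class SimdSum
{
    // Sums an int array with Vector<T>: each iteration of the main loop adds
    // Vector<int>.Count lanes at once instead of a single element.
    public static int Sum(int[] data)
    {
        var acc = Vector<int>.Zero;
        int width = Vector<int>.Count; // lanes per vector register (e.g., 8 with AVX2)
        int i = 0;
        for (; i <= data.Length - width; i += width)
            acc += new Vector<int>(data, i); // SIMD add across all lanes

        int total = 0;
        for (int lane = 0; lane < width; lane++)
            total += acc[lane]; // horizontal reduction of the accumulator
        for (; i < data.Length; i++)
            total += data[i];   // scalar tail for the leftover elements
        return total;
    }
}
```

On an array holding 1..100 this returns 5050, same as a scalar loop; the win is throughput on large arrays, and it is worth benchmarking since the JIT may auto-vectorize some scalar loops anyway.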

  • Romain Ottonelli Dabadie

    .Net Enthusiasts | Tech Lead | Microsoft MVP

    27,186 followers

    Using temporary collections? Your memory usage isn't comfortable with that choice.

    You can relieve memory pressure by improving how you return collections. One of the cleanest and most performant improvements you can make to your C# code is embracing `yield return`. Instead of creating temporary lists, populating them, and returning them later, yield lets you return elements on demand, dramatically reducing memory allocation.

    The benefits are significant:
    - Reduced memory usage: No need to store the entire collection in memory at once
    - Improved performance: Processing starts immediately as values are yielded
    - Lazy evaluation: Items are only generated when consumed, saving processing time
    - Cleaner code: Less boilerplate for creating and managing temporary collections

    This pattern becomes especially powerful with large datasets or in performance-critical systems where every KB of memory matters.

    What small optimizations have made the biggest impact in your codebase? Share in the comments below!

    #csharp #performance #dotnet #software
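A before/after sketch of the pattern (method names are mine, chosen for illustration): both methods filter positives from a sequence, but the first materializes a temporary `List<int>` holding every match, while the second streams them one at a time.

```csharp
using System.Collections.Generic;

static class Filtering
{
    // Before: a temporary List<int> holds every match in memory at once.
    public static List<int> PositivesEager(IEnumerable<int> source)
    {
        var result = new List<int>();
        foreach (int n in source)
            if (n > 0) result.Add(n);
        return result;
    }

    // After: yield return streams matches one at a time — no temporary list,
    // and the caller starts processing as soon as the first match appears.
    public static IEnumerable<int> PositivesLazy(IEnumerable<int> source)
    {
        foreach (int n in source)
            if (n > 0) yield return n;
    }
}
```

Both produce the same sequence, but the lazy version allocates no backing list; one caveat worth knowing is that the lazy version re-runs the filter if the caller enumerates it twice, so materialize it once if you need multiple passes.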

  • Carl-Hugo Marcotte

    Author of Architecting ASP.NET Core Applications: An Atypical Design Patterns Guide for .NET 8, C# 12, and Beyond | Software Craftsman | Principal Architect | .NET/C# | AI

    8,726 followers

    🔥 Optimizing performance using C# 🔥

    Welcome to our series' seventh and final post, where we introduce performance optimization techniques for creating and managing variables. When your code demands high performance, understanding how to optimize memory and reduce overhead becomes crucial. This post explores techniques like stack allocation, pointers, and object pooling, which are invaluable in performance-critical scenarios. These techniques help your code run efficiently without unnecessary memory allocations. Each one is advanced enough to be the subject of its own post, so consider this an introduction.

    📝 Summary
    Here are key techniques you can use to optimize performance:
    - Stack allocation: Use Span<T> and stackalloc to allocate memory on the stack, which is faster than heap allocation.
    - Pointers: Use unsafe pointers to access memory directly in highly optimized code. Be cautious: unsafe pointers bypass the runtime's memory safety features.
    - Fixed buffers: Inside unsafe code, fixed-size buffers offer control over memory layout by inlining the array with the rest of the struct instead of allocating it separately on the heap.
    - Object pooling: Leverage ArrayPool<T> to reuse arrays, almost negating the cost of array creation.
    - ref struct: Keeps the struct on the stack, ruling out heap allocation and ensuring better performance.
    - Span<T>: Provides performant access to existing memory blocks (e.g., arrays, stack-allocated, or unmanaged memory) while avoiding unnecessary allocations and garbage collection overhead.
    - Memory<T>: Similar to Span<T> but without its restrictions: it can be stored on the heap, making it ideal when memory needs to persist beyond the current stack frame.
    - in parameters: Pass a struct by reference without copying it, while guaranteeing read-only access.

    💬 Comments
    What's your experience with optimizing performance? Have you used any of these techniques before? Share your thoughts in the comments!

    📣 Any deep-dive post you would like to see? Let me know in the comments if there are subjects you'd like to explore in more depth (it doesn't have to be related to this post).

    🔑 Important
    To run unsafe code, you must explicitly allow it, for example by adding `<AllowUnsafeBlocks>true</AllowUnsafeBlocks>` to your .csproj file.

    🔔 Note
    You may never need any of these techniques, yet it's essential to know they exist for the day you do!

    #CSharp #dotnet #ProgrammingTips #SoftwareDevelopment #CodeOptimization #HighPerformance #PerformanceOptimization #LearnCSharp #AdvancedOptimization
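As a minimal sketch of two of these techniques — stack allocation and object pooling — here is an example of my own (method names are illustrative, not from the post). The first method formats an int into a `stackalloc` buffer that the GC never sees; the second rents and returns a scratch array from `ArrayPool<T>` instead of allocating a fresh one per call.

```csharp
using System;
using System.Buffers;

static class LowAllocation
{
    // Stack allocation: a small scratch buffer via stackalloc + Span<char>
    // lives on the stack, so no heap allocation and no GC pressure.
    public static int SumOfDigits(int value)
    {
        Span<char> buffer = stackalloc char[16]; // plenty for any int, sign included
        value.TryFormat(buffer, out int written);
        int sum = 0;
        foreach (char c in buffer[..written])
            if (char.IsDigit(c)) sum += c - '0';
        return sum;
    }

    // Object pooling: rent a reusable array from ArrayPool<T> instead of
    // allocating (and later garbage-collecting) a new one on every call.
    public static int SumSquares(int count)
    {
        int[] rented = ArrayPool<int>.Shared.Rent(count); // may be larger than count
        try
        {
            int sum = 0;
            for (int i = 0; i < count; i++)
            {
                rented[i] = i * i; // use the pooled array as scratch space
                sum += rented[i];
            }
            return sum;
        }
        finally
        {
            ArrayPool<int>.Shared.Return(rented); // hand it back for reuse
        }
    }
}
```

Two things to keep in mind: a rented array can be larger than requested and may contain stale data, so always track your own length; and `stackalloc` buffers must stay small, since stack space is limited.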
