Optimization Algorithms in Engineering

Explore top LinkedIn content from expert professionals.

Summary

Optimization algorithms in engineering are mathematical tools used to find the best solution for complex problems, such as improving designs, reducing energy use, or managing resources, by searching through possible options and choosing the one that meets all constraints and goals. These methods help engineers solve challenges like minimizing drag in airfoils, efficiently scheduling resources, and balancing power and cooling in large data centers.

  • Explore algorithm choices: Try different optimization approaches—from gradient-based methods to quantum-inspired algorithms—to address unique engineering challenges and scale solutions for larger projects.
  • Build structured models: Clearly define variables, constraints, and objectives before applying optimization algorithms to ensure accurate and practical results.
  • Iterate with real data: Continuously update and refine your optimization models using actual performance metrics and telemetry to improve outcomes and adapt to changing conditions.
Summarized by AI based on LinkedIn member posts
  • View profile for Mohamed Amine Abassi, PhD

    Postdoc Scholar Researcher

    3,577 followers

    Optimization powers a huge slice of modern work: we train neural networks by minimizing loss functions, rebalance portfolios by minimizing risk for a target return, tune engineering designs (e.g., airfoil shapes in CFD) to reduce drag under constraints, route delivery fleets to cut fuel costs, and even fit scientific models to experimental data. In each case, we’re searching a (sometimes massive) landscape for parameters that make an objective f(⋅) as small as possible.

    Gradient Descent (GD) is the starting point: follow the local slope downhill—simple and reliable, though it can zig-zag and slow down in narrow, ill-conditioned valleys.

    Stochastic Gradient Descent (SGD) makes this scalable by using a small random batch to estimate the slope, dramatically reducing cost per step and enabling learning on huge datasets—even if the steps are noisy and need schedules or momentum to stabilize.

    Conjugate Gradient (CG) (often called “conjugate gradient descent” informally) fixes GD’s zig-zag on large symmetric positive-definite (SPD) quadratic problems by building search directions that don’t “undo” each other, achieving much faster convergence without storing big matrices (and with nonlinear variants plus line search for general smooth problems).

    Finally, L-BFGS brings second-order smarts to nonlinear optimization at near first-order cost by approximating curvature from a short history of gradients and steps, delivering larger, better-aimed updates and typically far fewer iterations than vanilla GD—especially on smooth, ill-conditioned objectives. (A minimal GD-vs-L-BFGS sketch follows below.)

    #Optimization #Mathematics #Linear_Algebra #CFD #Numerical_Methods #L_BFGS #Gradient_descent
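    To see the contrast concretely, here is a minimal sketch (not from the post; the matrix, step size, and iteration counts are illustrative choices) comparing plain gradient descent with SciPy's L-BFGS-B on an ill-conditioned quadratic:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Ill-conditioned quadratic: f(x) = 0.5 * x^T A x, condition number 1000.
    A = np.diag([1.0, 1000.0])
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x

    # Plain gradient descent: the step size is capped by the largest curvature,
    # so progress along the flat direction is slow (the zig-zag/valley effect).
    x = np.array([1.0, 1.0])
    lr = 1.0 / 1000.0                     # must stay below 2 / L, with L = 1000
    for k in range(1000):
        x = x - lr * grad(x)
    print(f"GD after 1000 steps: f = {f(x):.2e}")

    # L-BFGS: approximates curvature from a short history of gradients and
    # steps, typically reaching far better accuracy in far fewer iterations.
    res = minimize(f, np.array([1.0, 1.0]), jac=grad, method="L-BFGS-B")
    print(f"L-BFGS-B: f = {res.fun:.2e} in {res.nit} iterations")
    ```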

  • View profile for Srishtik Dutta

    SWE-2 @Google | Ex - Microsoft, Wells Fargo | ACM ICPC ’20 Regionalist | 6🌟 at Codechef | Expert at Codeforces | Guardian (Top 1%) on LeetCode | Technical Content Writer ✍️| 125K+ on LinkedIn

    131,869 followers

    Thought you knew all about DP? Here’s an expanded tour of DP optimization techniques, from the fundamentals all the way to advanced tricks:

    1. Top-Down vs. Bottom-Up
    🔹 Memoization (recursion + cache)
    🔹 Tabulation (iterative table filling)

    2. Space-Saving Strategies
    🔹 Rolling arrays: Keep only the last one or two rows (or dimensions) of your DP table.
    🔹 Bitsets: Pack small states into bit operations for ultra-fast transitions.

    3. Prefix-Sum & Difference Tricks
    🔹 Precompute cumulative sums to reduce O(N) transition loops to O(1).
    🔹 Use difference arrays for range-update patterns in DP.

    4. Monotonic Queue / Sliding Window
    🔹 For “min/max over last K states” problems, maintain a deque of candidates in amortized O(1) per update.

    5. Bitmask & SOS-DP
    🔹 Bitmask DP for subsets of up to ~20 elements (2ⁿ states).
    🔹 SOS (Sum Over Subsets) DP to compute functions on all subsets via fast zeta transforms.

    6. Segment-Tree-Backed DP
    🔹 Use a segment tree (or Fenwick tree) to answer range min/max queries or do range updates on your DP array in O(log N).
    🔹 Merge DP states efficiently when you need non-trivial transitions over intervals.

    7. 1D/1D (Monge or Quadrangle-Inequality) Optimization
    🔹 Targets recurrences of the form dp[i] = min_{0 ≤ j < i} [dp[j] + w(j, i)] where w satisfies the quadrangle (Monge) inequality, so the argmin indices k(i) are non-decreasing.
    🔹 Use divide-and-conquer to compute all dp[i] in O(N log N), or Knuth’s optimization to push it to O(N) when stronger conditions hold.

    8. Divide-and-Conquer Optimization
    🔹 A special case of 1D/1D when optimal split points are monotonic: drop O(N²) down to O(N log N) by recursively solving on segments and narrowing search ranges.

    9. Knuth / Quadrangle Inequality
    🔹 When cost functions satisfy the quadrangle inequality and boundary conditions, you can reduce range-DP from O(N³) to O(N²) (or even to O(N) in certain forms).

    10. Convex Hull Trick & Li Chao Tree
    🔹 Optimize linear recurrences of the form dp[i] = min_j [m_j * x_i + b_j] from O(N²) to O(N log N) (or O(N) with a monotonic hull).

    11. FFT-Based Convolution
    🔹 Use fast polynomial multiplication (FFT) to merge DP steps in O(N log N) instead of O(N²).

    12. Matrix Exponentiation / Chain Exponentiation
    🔹 Model linear recurrences as dp_vec[i] = M * dp_vec[i−1] and raise the transition matrix M to the nᵗʰ power in O(k³ log n) (or faster) to compute dp[n] in logarithmic time.

    13. Berlekamp–Massey Algorithm
    🔹 Given the first 2k terms of a sequence, extract its minimal linear recurrence in O(k²).
    🔹 Combine with fast exponentiation to compute the nᵗʰ term in O(k² log n), even for very large n.

    14. Slope Trick & Aliens’ Trick
    🔹 Handle piecewise-linear DP functions and complex cost updates by maintaining envelopes of slopes.
    🔹 Ideal for “add a V-shaped penalty” or “minimize sum of absolute deviations plus a quadratic cost.”

    Mastering these tools will raise your problem-solving skills, whether you’re in a contest or an interview. (A minimal example of the rolling-array trick follows this list.)
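    As a taste of trick #2 above, here is a minimal rolling-array sketch: the classic 0/1 knapsack DP collapsed from an O(N×C) table to a single 1-D array (the weights, values, and capacity are made up for illustration):

    ```python
    # 0/1 knapsack with a rolling 1-D array: dp[c] is the best value achievable
    # with capacity c, reusing one row instead of a full N x C table.
    def knapsack(weights, values, capacity):
        dp = [0] * (capacity + 1)
        for w, v in zip(weights, values):
            # Iterate capacity downward so each item is counted at most once.
            for c in range(capacity, w - 1, -1):
                dp[c] = max(dp[c], dp[c - w] + v)
        return dp[capacity]

    # Example: items (weight, value) = (3, 4), (4, 5), (5, 6), capacity 8.
    print(knapsack([3, 4, 5], [4, 5, 6], 8))  # -> 10 (take weights 3 and 5)
    ```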

  • View profile for Mark Peters

    Chief Information Officer | AI Infrastructure, Data Center Transformation & IT Operations

    7,681 followers

    𝗛𝗼𝘄 𝘁𝗼 𝗔𝗽𝗽𝗹𝘆 𝗤𝘂𝗮𝗻𝘁𝘂𝗺-𝗜𝗻𝘀𝗽𝗶𝗿𝗲𝗱 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺𝘀 𝘁𝗼 𝗗𝗮𝘁𝗮 𝗖𝗲𝗻𝘁𝗲𝗿 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 (𝗔𝗜𝗢𝗽𝘀 𝗪𝗶𝘁𝗵𝗼𝘂𝘁 𝗮 𝗤𝘂𝗮𝗻𝘁𝘂𝗺 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿)

    Most leaders hear “quantum” and think of it as experimental, expensive, and years away. That’s a mistake. Quantum-inspired algorithms run on classical infrastructure today and solve the hardest problem you actually have: large-scale optimization under constraints. If you run data centers, this is immediately actionable.

    What they actually do: They convert your environment into an energy minimization problem. Instead of brute-forcing every possibility, they rapidly converge on high-quality solutions across massive decision spaces. Think:
    • Placement
    • Scheduling
    • Routing
    • Thermal balancing
    • Power allocation

    Where to apply first (high-ROI use cases)
    1. Rack and cluster placement: Model racks, power domains, cooling zones, and network topology as constraints. Objective: minimize latency + cable length + thermal hotspots.
    2. GPU scheduling and utilization: Encode job priority, SLA windows, GPU affinity, and network contention. Objective: maximize utilization while reducing idle burn and queue latency.
    3. Thermal + power balancing: Integrate cooling capacity, airflow constraints, and power density. Objective: flatten hotspots without over-provisioning.
    4. Network traffic shaping: Model east-west traffic flows and oversubscription ratios. Objective: reduce congestion and packet loss under peak load.

    How to implement (practical workflow)
    Step 1: Define variables.
    • Binary: placement decisions, routing paths
    • Continuous: load, temperature, power draw
    Step 2: Define constraints.
    • Power caps per rack and row
    • Cooling limits by zone
    • Network bandwidth ceilings
    • SLA requirements
    Step 3: Build the objective function. Combine into a weighted cost function:
    • Latency
    • Energy consumption
    • Thermal deviation
    • Resource fragmentation
    Step 4: Select a solver. Use simulated annealing or related heuristics to explore the solution space efficiently. (A toy annealing sketch follows this post.)
    Step 5: Iterate with real telemetry. Feed in live data from DCIM, BMS, and scheduler metrics, and continuously refine the model.

    What “good” looks like
    • 10–25% improvement in GPU utilization
    • Lower east-west congestion without network upgrades
    • Reduced thermal excursions
    • Faster schedule generation cycles

    Where most teams fail
    • Overfitting the model before validating its impact
    • Ignoring real-time telemetry
    • Treating this as a one-time optimization instead of a continuous system

    Bottom line: You don’t need quantum hardware to get quantum-level thinking. You need a structured optimization model and the discipline to iterate it against real operating data. If you’re running >10MW environments and not doing this, you’re leaving efficiency and margin on the table.

    #DataCenters #AIInfrastructure #GPU #Optimization #HighPerformanceComputing #Cloud #Infrastructure #DigitalTransformation
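    For Step 4, a toy simulated-annealing sketch: it searches permutations of workload-to-slot placements to flatten thermal hotspots. All cost terms, loads, and schedule constants here are illustrative assumptions, not a production model:

    ```python
    import math, random

    random.seed(0)
    N = 20
    heat = [random.random() for _ in range(N)]   # hypothetical per-workload load

    def cost(perm):
        # Hotspot penalty: adjacent slots whose combined load exceeds a threshold.
        return sum(max(0.0, heat[perm[i]] + heat[perm[i + 1]] - 1.0)
                   for i in range(N - 1))

    state = list(range(N))
    cur = cost(state)
    best, best_cost = state[:], cur
    T = 1.0
    for step in range(20000):
        i, j = random.sample(range(N), 2)
        state[i], state[j] = state[j], state[i]          # propose a swap
        new = cost(state)
        if new <= cur or random.random() < math.exp((cur - new) / T):
            cur = new                                    # accept (even uphill,
            if cur < best_cost:                          #  with probability e^-d/T)
                best, best_cost = state[:], cur
        else:
            state[i], state[j] = state[j], state[i]      # reject: undo the swap
        T *= 0.9997                                      # geometric cooling
    print("best hotspot penalty:", round(best_cost, 3))
    ```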

  • View profile for Ameya D. Jagtap

    Assistant Professor | Scientific Machine Learning Expert | AI4Science | Scientific ML for Real-World Physics

    5,851 followers

    Exciting News to Kick Off 2025! I'm happy to announce that our latest paper, titled 'Large Language Model-Based Evolutionary Optimizer: Reasoning with Elitism', has been published in Neurocomputing, Elsevier! This work explores the potential of Large Language Models (LLMs) as black-box optimizers, leveraging their remarkable reasoning capabilities for zero-shot optimization across a variety of scenarios, including multi-objective and high-dimensional problems. We introduce the Language-Model-Based Evolutionary Optimizer (LEO), a novel, population-based method for numerical optimization. Applications include benchmark challenges and real-world engineering problems such as supersonic nozzle shape optimization, heat transfer optimization, and windfarm layout optimization. Key Highlights: 1. Comparable performance to state-of-the-art optimization methods, with insights into leveraging LLMs' creative potential while addressing challenges like hallucinations. 2. Practical guidelines for reliable optimization using LLMs. 3. Limitations and exciting directions for future research. A huge thanks to all the collaborators Shuvayan Brahmachary, Subodh Joshi, Kaushic K, Kaushik Koneripalli, Aniruddha Panda, Harshil Patel, PhD, et al.; and the reviewers for their support and feedback! If you're interested in cutting-edge intersections of AI, optimization, and engineering, I invite you to check out the paper: https://lnkd.in/e5hzJwhh Wishing everyone a joyful and prosperous New Year!

  • View profile for Howard Heaton

    Algorithms + Modeling | Educator | Quant | Full-Stack Mathematician

    2,726 followers

    Stop settling for vanilla fixed-point updates. Optimization algorithms can reduce to fixed-point steps, and there's a toolbox of options.

    🔎 What’s a fixed point? Consider a function T(x) mapping a Euclidean space back onto the Euclidean space. We say a point x* is a fixed point of T if x* = T(x*), i.e. applying T yields what you input. And these fixed points are precisely the solutions to our optimization problems.

    📓 Fixed Point Iterations
    Here are a few schemes for finding fixed points:
    🔹 Banach–Picard: This is the vanilla update most people are familiar with, where you repeatedly apply T to your current point. Each step shrinks the distance to the fixed point by the same factor.
    🔹 Krasnosel'skiĭ–Mann (KM): Take a weighted average of your current point and T(current), gently steering toward a fixed point without overshooting.
    🔹 Fast KM: Like KM, but also adds a slice of the previous movement to speed up average progress.
    🔹 Heavy-Ball: Add “momentum” by applying T not to x^k, but to x^k plus a fraction of your last step, giving extra push.
    🔹 Halpern: At each update, mix in a fixed anchor point u and then apply T, gradually shifting all the weight onto T to get the solution closest to u.
    🔹 Viscosity Approximation: Blend a simple contraction f(x) with T(x) each step, driving iterates toward a fixed point of T that satisfies a variational inequality.
    🔹 Ishikawa: Do a two-stage move: first mix toward T(x) to get y, then apply T to y and blend back with your original x for extra stability.

    The assumptions and trajectories of each scheme vary, and so each ought to be chosen with your application in mind. (A minimal Picard-vs-KM sketch follows below.)

    🔭 Looking ahead: There are more sophisticated quasi-Newton style schemes like Anderson Acceleration and SuperMann for fixed-point iteration.

    ⸻

    ♻️ Learn something new? Repost so others can too.
    💬 Questions/feedback are welcome in the comments.
    🔔 Never miss a post—get it to your inbox: https://typalacademy.com

    #algorithms #mathematics #optimization
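    A minimal sketch of the first two schemes, assuming T is the gradient-descent map of a simple quadratic (so its fixed point is the minimizer at the origin); the matrix and step size are chosen to make plain Picard iterates oscillate:

    ```python
    import numpy as np

    # Banach-Picard vs. Krasnosel'skii-Mann (KM) iteration. T is the
    # gradient-descent map of f(x) = 0.5 x^T A x, so its fixed point is
    # x* = 0. The step 0.199 sits just under the stability limit 2/L = 0.2,
    # making the Picard iterates oscillate along the stiff direction.
    A = np.diag([1.0, 10.0])
    T = lambda x: x - 0.199 * (A @ x)

    x = np.array([5.0, 5.0])
    for _ in range(20):
        x = T(x)                            # Picard: x_{k+1} = T(x_k)
    print("Picard ||x||:", np.linalg.norm(x))

    x = np.array([5.0, 5.0])
    lam = 0.5                               # averaging weight in (0, 1)
    for _ in range(20):
        x = (1 - lam) * x + lam * T(x)      # KM: averaging damps the oscillation
    print("KM     ||x||:", np.linalg.norm(x))
    ```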

  • View profile for Jarith Fry

    Business Owner / Automation Expert

    2,003 followers

    Just wrapped up a fun engineering challenge: building a constraint-driven optimization engine for complex axis assignment problems — the kind we bump into all the time in industrial automation, controls, and high-precision motion systems.

    The idea sounds simple: match a set of movable axes to a set of target positions. The reality: every axis has its own travel limits, allowable region, spacing rules, and mechanical ordering… and every invalid combination needs to be avoided.

    To keep it clean and rock-solid, the engine does a two-stage approach:

    1️⃣ Smart assignment
    It evaluates feasible permutations, enforces mechanical monotonicity (no crossing), respects per-axis limits, honors pairwise spacing rules, and selects a contiguous block of axes — no “holes” allowed. The best solution wins based on a cost model that favors stable, centered, predictable motion. (A simplified sketch of this stage follows below.)

    2️⃣ Intelligent parking
    Unused axes are placed safely outside the active region. Then a refinement step nudges those parked positions just enough to satisfy limits, clearances, and spacing rules without disturbing the optimized core.

    Along the way, the system reports every violation, cost component, and decision path — transparent, debuggable, deterministic.

    It’s designed with Inductive Automation’s Ignition, Python, real-world PLC constraints, and the kind of control-system edge cases you get from Allen-Bradley, motion rigs, and industrial equipment in mind. Perfect fit for heavy-duty production environments.

    Really proud of how this one came together — blending optimization theory, practical controls engineering, and real-world mechanical constraints into one clean engine.

    #IndustrialAutomation #ControlsEngineering #Ignition #InductiveAutomation #PLC #AllenBradley #Optimization #ManufacturingTech #MotionControl
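    In the same spirit (though far simpler than the engine described), a hedged sketch of the assignment stage: with the axes in mechanical order and the targets sorted, assigning a contiguous block of axes to the targets in order rules out crossing by construction, and limits and spacing are then checked per candidate block. All names, limits, and the cost model are illustrative assumptions:

    ```python
    # Hypothetical axes as (name, travel_lo, travel_hi), in mechanical order.
    AXES = [("A1", 0, 120), ("A2", 40, 200), ("A3", 90, 260), ("A4", 150, 320)]
    TARGETS = sorted([100, 180])
    MIN_SPACING = 20.0

    # Pairwise spacing between sorted targets must hold for any assignment.
    assert all(b - a >= MIN_SPACING for a, b in zip(TARGETS, TARGETS[1:]))

    def block_cost(block):
        # Favor stable, centered motion: distance of each target from mid-travel.
        return sum(abs(t - (lo + hi) / 2) for (_, lo, hi), t in block)

    best = None
    n = len(TARGETS)
    for start in range(len(AXES) - n + 1):
        block = list(zip(AXES[start:start + n], TARGETS))  # contiguous, no holes
        if any(not (lo <= t <= hi) for (_, lo, hi), t in block):
            continue                                       # travel limit violated
        c = block_cost(block)
        if best is None or c < best[0]:
            best = (c, [(name, t) for (name, _, _), t in block])

    print(best)   # -> (25.0, [('A2', 100), ('A3', 180)])
    ```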

  • View profile for Warren Powell

    Professor Emeritus, Princeton University/ Co-Founder, Optimal Dynamics/ Executive-in-Residence Rutgers Business School

    53,287 followers

    Despite my sober analysis of the usefulness of stochastic optimization, there is one class of tools that have emerged that have been *very* valuable ... stochastic search.

    This is a problem that can be written

    min_x E_W[F(x, W)]

    where F(x, W) is some unknown function, x is what we control, and W represents a set of random inputs that results in a noisy observation of F(x, W). F(x, W) could be a computer simulation, the results of a laboratory experiment, testing a new manufacturing process, or the performance of a drug administered to a patient.

    There are two major classes of stochastic search algorithms:
    • Derivative-based stochastic search – first paper by Robbins and Monro (1951)
    • Derivative-free stochastic search – first paper by Box and Wilson (1951)

    Both fields of research remain astonishingly active, reflecting the vast range of applications for both. Of particular importance are functions that are a) noisy and b) computationally expensive, which means we have to find good solutions in a small number of iterations.

    Derivative-based stochastic search involves relatively simple methods that people implement on their own. (A minimal Robbins–Monro sketch follows below.)

    Derivative-free stochastic search is a much richer (and harder) problem class which has been studied under different names, including ranking and selection, multiarmed bandit problems, and Bayesian optimization. The problems arise in both offline and online learning and come in a wide range of variations. Packages exist, but I think many people implement their own algorithms.

    Both of these are themselves forms of sequential decision problems. For derivative-free stochastic search, the effect of a decision in one iteration is communicated through the updated *belief* about the function E_W[F(x, W)]. This is an important simplification compared to problems involving the management of resources, where we have to capture the physical state variables.
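    To illustrate the derivative-based branch, a minimal Robbins–Monro-style sketch; the objective and noise model are made up for illustration:

    ```python
    import numpy as np

    # Minimize E_W[F(x, W)] with F(x, W) = (x - W)^2 and W ~ N(3, 1).
    # The expectation is (x - 3)^2 + 1, so the true minimizer is x* = 3,
    # but each gradient observation 2 * (x - W) is noisy.
    rng = np.random.default_rng(0)

    x = 0.0
    for n in range(1, 5001):
        W = rng.normal(3.0, 1.0)          # one noisy observation per iteration
        g = 2.0 * (x - W)                 # unbiased stochastic gradient
        x -= g / n                        # Robbins-Monro step size a_n = 1/n
    print("estimate of x*:", round(x, 3))  # converges to ~3
    ```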

  • View profile for Xavier Morera

    I help companies turn knowledge into execution with AI-assisted training (increasing revenue) | Lupo.ai Founder | Pluralsight | EO

    8,913 followers

    𝗘𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻𝗮𝗿𝘆 𝗖𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴 𝗖𝗹𝗮𝘀𝘀𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 - 𝗦𝘄𝗮𝗿𝗺 𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻

    Swarm Intelligence Optimization (SO) is a class of optimization algorithms inspired by the behavior of social animals, such as birds, ants, and bees. They work by representing candidate solutions as particles and then using interaction and cooperation between these particles to find a better solution. Some of the most widely used families of algorithms in SO include:

    ▪️ 1. Ant Colony Optimization (ACO). Ant Colony Optimization is a swarm intelligence optimization algorithm inspired by the foraging behavior of ants. In ACO, candidate solutions to a problem are represented as ants that explore the search space and update a pheromone trail that guides the movement of other ants. The ants update the pheromone trail based on the quality of the solutions they find, with better solutions leading to stronger pheromone trails.

    ▪️ 2. Particle Swarm Optimization (PSO). Particle Swarm Optimization is a swarm intelligence optimization algorithm that mimics the behavior of social animals, such as birds and fish, to find optimal solutions to a problem. In PSO, candidate solutions are represented as particles that move and interact with each other in a search space. (A minimal PSO sketch follows this post.)

    ▪️ 3. Artificial Bee Colony (ABC). Artificial Bee Colony is a swarm intelligence optimization algorithm inspired by the foraging behavior of honey bees. In ABC, candidate solutions to a problem are represented as bees that explore the search space and update their positions based on the quality of the solutions they find. A combination of local and global information about the search space guides the movement of the bees.

    ▪️ 4. Firefly Algorithm (FA). Firefly Algorithm is a swarm intelligence optimization algorithm inspired by the flashing behavior of fireflies. In FA, candidate solutions to a problem are represented as fireflies that emit light and update their positions based on the quality of the solutions they find. The fireflies' movement is guided by their relative brightness, attracting other fireflies to their location.

    ▪️ 5. Cuckoo Search (CS). Cuckoo Search is a swarm intelligence optimization algorithm that is inspired by the egg-laying behavior of cuckoos. In CS, candidate solutions to a problem are represented as cuckoos that lay eggs in nests and update their positions based on the quality of the solutions they find.

    Basically, all these algorithms are used for optimization but use different strategies to find the optimal solution. Evolutionary algorithms use a genetic metaphor to evolve a population of candidate solutions, while swarm optimization uses a population of agents that interact with each other.

    #evolutionarycomputing #swarmintelligence #optimizationalgorithms #computerscience #datamining #technology
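    A minimal PSO sketch, assuming common textbook hyperparameters and using the sphere function (minimum at the origin) as a stand-in for a real objective:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles, dim = 30, 5
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social

    f = lambda X: np.sum(X**2, axis=1)           # sphere function, row-wise
    X = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    V = np.zeros_like(X)                         # particle velocities
    pbest, pbest_val = X.copy(), f(X)            # personal bests
    g = pbest[np.argmin(pbest_val)]              # global best

    for it in range(200):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        # Each particle is pulled toward its own best and the swarm's best.
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = X + V
        vals = f(X)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)]
    print("best value found:", f(g[None])[0])
    ```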

  • View profile for Dr. Tim Varelmann

    Reduce Costs & Mistakes through Mathematical Optimization | Production Planning, Energy Consumption & Generation, SCM & Logistics | Author of “Effortless Modeling in Python with GAMSPy”, the world’s first GAMSPy course

    5,082 followers

    How Optimization Algorithms Work – Explained Simply

    🔹 Heuristics: Fast & Practical
    A heuristic is a rule of thumb for decision-making. It doesn’t guarantee the best solution, but it’s often good enough. A strong heuristic has at least one of these qualities:
    ✅ It finds high-quality solutions quickly.
    ✅ It delivers decent results with minimal effort.
    Even if a heuristic fails often, it can still be valuable if it's computationally cheap to try again.

    🔹 Local Optimization: Climbing the Wrong Hill
    Imagine climbing a mountain. You always take steps that increase your altitude. If a step doesn’t go up, you try another direction. Eventually, you’ll reach a peak. Unless there is only one peak, the one you reach may not be the highest. That’s local optimization: great at fine-tuning solutions but often stuck in local optima.
    Fun fact: Mathematician Gunter Dueck once made a sign error in his algorithm. Instead of always stepping upward, his method allowed tiny downward steps. The result? A world record in solving Traveling Salesman Problems—and a new IBM research department built on this mistake. (A toy sketch of this idea follows below.)

    🔹 Constraint Programming: Solving Sudoku with Pencil & Rubber
    Solving a Sudoku puzzle with a pencil and rubber is a great analogy for constraint programming. Imagine a 3×3 box where you need to place the numbers 1, 2, and 3. You start by writing a 1 in the first available cell. Now, only the 2 and 3 remain. You pencil in the 2 in one of the two remaining spots. Then, you check whether the 3 fits in the last empty cell. If it works, great! If not, you erase the 2 and try placing it in the other spot. Still no luck? Then even the 1 was wrong, so you erase that too and start again with a different choice. Constraint programming works the same way: it systematically tries values, corrects mistakes, and efficiently finds valid solutions—just like a Sudoku solver with a good pencil and a well-used rubber.

    🔹 Global Optimization: Finding the Highest Peak Efficiently
    If I wanted to climb Germany’s highest mountain, I wouldn’t start hiking in Münsterland. I’d first take a train to the Alps—there’s no point searching for mountains in flatland. Once in the Alps, I’d only hike on clear days when I can see for kilometers. If I spot a higher peak, I’ll climb it. If there are no taller mountains in sight, I’ll note my altitude and move to a different region. And on cloudy days? I’d relax in the hotel and enjoy Bavarian cuisine. 😉
    This is how global optimization works. Instead of blindly searching everywhere, it rules out entire areas (like Münsterland) where the best solution can’t be. Then, it focuses computational effort on the most promising regions—just like hiking only on clear days for maximum visibility.

    🔎 Want to optimize your planning, scheduling, or resource allocation? Let’s talk! I help businesses streamline their decision-making using smart optimization techniques. Drop me a message! 🚀
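    The Dueck anecdote corresponds to what is now called threshold accepting. A toy sketch of the difference, assuming a made-up 1-D multimodal function; the step sizes and the downhill tolerance are illustrative, not tuned:

    ```python
    import math, random

    # Multimodal test function: many local peaks under a gentle global trend.
    f = lambda x: math.sin(5 * x) + 0.5 * math.sin(17 * x) - 0.1 * (x - 2) ** 2

    def climb(x, allow_down=0.0, steps=5000):
        random.seed(42)                      # same proposals for a fair comparison
        best = x
        for _ in range(steps):
            cand = x + random.uniform(-0.1, 0.1)
            # allow_down = 0 is pure hill climbing; allow_down > 0 also accepts
            # small downhill moves (threshold accepting), helping escape
            # shallow dips between neighboring peaks.
            if f(cand) > f(x) - allow_down:
                x = cand
                if f(x) > f(best):
                    best = x
        return best

    pure = climb(0.0)                        # can stall on the nearest local peak
    tolerant = climb(0.0, allow_down=0.3)    # tiny downward steps allowed
    print(f"pure hill climb:     f = {f(pure):.3f}")
    print(f"threshold accepting: f = {f(tolerant):.3f}")
    ```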

  • View profile for Can Li

    Assistant Professor at Purdue University

    2,765 followers

    🎯 How can we use a low-fidelity optimization model to achieve similar performance to a high-fidelity model? Many decision-making algorithms can be viewed as tuning a low-fidelity model within a high-fidelity simulator to achieve improved performance. A great example comes from Cost Function Approximations (CFAs) by Warren Powell. CFAs embed tunable parameters, such as cost coefficients, into a simplified, deterministic model. These parameters are then refined by optimizing performance in a high-fidelity stochastic simulator, either via derivative-free or gradient-based methods. A similar philosophy appears in optimal control, where controllers are tuned using simulation optimization.

    ⚙️ Inspired by this paradigm, my student Asha Ramanujam recently developed the PAMSO algorithm. PAMSO—Parametric Autotuning for Multi-Timescale Optimization—tackles complex systems that operate across multiple timescales:
    • High-level decision layer: makes strategic decisions (e.g., planning, design).
    • Low-level decision layer: takes high-level inputs, makes detailed operating decisions (e.g., scheduling), applies detailed constraints and uncertainties, and computes the true objective.
    However, one-way top-down communication between layers often results in infeasibility or poor solutions due to mismatches between the high-level and the detailed low-level operating models.

    💡 PAMSO augments the high-level model with tunable parameters that serve as a proxy for the complex physics and uncertainties embedded in the low-level model. Instead of attempting to jointly solve both levels, we fix the hierarchical structure: the high-level layer makes planning or design decisions, and then passes them down to the low-level scheduling or operational layer, which acts as a high-fidelity simulator. We treat this top-down hierarchy as a black box:
    • The inputs are the tunable parameters embedded in the high-level model.
    • The output is the overall objective value after the low-level simulator evaluates feasibility and performance.
    By optimizing these parameters using derivative-free methods, PAMSO is able to steer the entire system toward high-quality, feasible solutions. (A toy sketch of this tuning loop follows below.)

    🚀 Bonus: Transfer Learning! If these parameters are designed to be problem-size invariant, they can be tuned on smaller problem instances and transferred to solve larger-scale problems with minimal extra effort.

    ⚙️ Case studies demonstrate PAMSO’s scalability and effectiveness in generating good, feasible solutions:
    ✅ A MINLP model for integrated design and scheduling in a resource-task network with ~67,000 variables
    ✅ A massive MILP model for integrated planning and scheduling of electrified chemical plants and renewable energy with ~26 million variables
    Even solving the LP relaxation of these problems is beyond memory limits, and their structure is not easily decomposable for optimization techniques. https://lnkd.in/gDfcvDaZ
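    A toy sketch of the general paradigm (NOT the PAMSO implementation): a low-fidelity planner exposes one tunable cost coefficient, a higher-fidelity stochastic simulator scores the resulting plan, and a derivative-free method tunes the coefficient. All models, numbers, and names here are made-up assumptions:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def plan(theta):
        # Low-fidelity layer: pick capacity x minimizing theta*x + 100/x,
        # which has the closed form x = sqrt(100 / theta).
        return np.sqrt(100.0 / max(theta, 1e-6))

    def simulate(x, n=2000, seed=0):
        # High-fidelity layer: the true cost includes random demand and a
        # shortfall penalty that the low-fidelity model ignores. Fixing the
        # seed gives common random numbers, keeping derivative-free tuning stable.
        rng = np.random.default_rng(seed)
        demand = rng.gamma(4.0, 2.0, size=n)          # mean demand = 8
        shortfall = np.maximum(demand - x, 0.0)
        return float(np.mean(2.0 * x + 15.0 * shortfall))

    # Treat planner -> simulator as a black box in theta; tune derivative-free.
    objective = lambda theta: simulate(plan(theta[0]))
    res = minimize(objective, x0=[2.0], method="Nelder-Mead")
    print("tuned theta:", res.x[0], "-> planned capacity:", plan(res.x[0]))
    ```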
