Simulation and Data Analysis in Engineering

Explore top LinkedIn content from expert professionals.

Summary

Simulation and data analysis in engineering combine computer modeling with data-driven insights to predict how systems behave, spot potential problems, and improve designs without relying solely on real-world testing. By using virtual models alongside real measurements, engineers can understand complex systems more fully, make smarter decisions, and reduce risks before changes are made in the real world.

  • Question assumptions: Regularly check the assumptions built into your models by comparing simulation results with real-world measurements to catch hidden errors early.
  • Use data together: Blend design models with live sensor data to create a feedback loop, so lessons learned during operation can improve both current performance and future designs.
  • Explore scenarios: Test a wide range of “what if” situations virtually, using simulations like Monte Carlo or discrete event modeling, to understand possible outcomes and better manage uncertainty and variability.
Summarized by AI based on LinkedIn member posts
  • Yuval H.

    Leading Application Engineering with expertise in Digital Strategy.

    9,159 followers

    When did we start trusting assumptions more than measurements?

    Simulation has transformed engineering. We can model complex systems, iterate designs quickly, and explore conditions that would be difficult or expensive to test. But every simulation begins with assumptions. Material properties are simplified. Boundary conditions are estimated. Interfaces are idealized. And sometimes, those small assumptions quietly evolve into large errors. That is where measurement brings us back to reality.

    In the following application, engineers needed to understand how a piston head truly behaves under load. Not in theory, but in practice. The piston head carries the full impact of combustion, influencing performance, efficiency, durability, and emissions. There is no margin for uncertainty. Strain gage sensors were installed directly on the piston head, and the assembly was placed into a simulated cylinder head. Instead of firing the engine, controlled air pressure was used to replicate the loading conditions seen in high-performance operation.

    This approach revealed something important. Even without extreme temperature effects, the measured strain data provided immediate insight into how the structure responded. It allowed engineers to validate their FEA model early, identify discrepancies, and refine the design before moving into more complex and costly testing. Testing did not replace simulation. It grounded it. Because in the end, the goal is not just to predict behavior. It is to understand it. And that understanding starts with measurement.
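The validation step described above can be sketched in a few lines: compare measured strains against FEA predictions at each gage location and flag any that fall outside an agreed tolerance. This is a hypothetical illustration; the gage values and the 10% tolerance are invented, not taken from the application described.

```python
# Hypothetical sketch: compare measured strain-gage readings against
# FEA-predicted strains at the same locations, flagging discrepancies
# beyond an agreed tolerance. All numbers are illustrative.

def validate_fea(predicted, measured, tolerance=0.10):
    """Per-location relative error and pass/fail against the tolerance."""
    report = []
    for loc, (p, m) in enumerate(zip(predicted, measured)):
        rel_err = abs(p - m) / abs(m)
        report.append((loc, rel_err, rel_err <= tolerance))
    return report

# Microstrain at four gage locations (invented numbers)
fea_strain      = [1520.0, 980.0, 2210.0, 640.0]
measured_strain = [1480.0, 1060.0, 2190.0, 775.0]
report = validate_fea(fea_strain, measured_strain)
# Location 3 misses the tolerance: that is where the model's assumptions
# (mesh, boundary conditions, material data) deserve a second look.
```

The useful output is not a single pass/fail but the per-location error map, which points at which assumption to revisit.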

  • David Roop

    Vice President, Power Systems Engineering

    4,396 followers

    Do you understand the "why" behind the engineering simulations you conduct? My greatest technical mentors always instilled a strong foundation in first principles, or "the first basis from which a thing is known." Unless you already know what to expect, or how to interpret what you see, how can you confirm the validity of a simulation result? When dealing with complex, multi-variable systems, I think of this as the "why" of what is being observed. With the increasing penetration of power electronic converters and the rapid change of the power system, the need for modeling and simulation engineers continues to grow. Easy access to modeling tools lets anyone install software and appear to be an expert. The most talented engineers I know are lifelong learners. I'd encourage everyone to keep their sense of curiosity. Ask questions and strive for a deeper understanding of what the simulation tools seem to be telling us. Don't assume the output is correct, or even the input. We should always employ good engineering judgment and make ethical decisions about how we treat assumptions. Situations such as those in the comment below likely occur to make a simulation result look "nicer", or stem from misunderstandings of the physics. A poor-looking result doesn't always mean an inaccurate result. When best practices and sound engineering judgment have been used, care should be taken when making adjustments to ensure validity. Otherwise, it's not the simulation result that's in error, but the input data or assumptions (think garbage in, garbage out). Loading input files and hitting run is not a valid approach to performing a technical study. Similar to the mantra of "measure twice, cut once", the testing and evaluation of sub-systems or components should ensure the quality of what's being used for the task at hand (be it harmonic analysis, dynamic stability, small-signal stability, or any other analysis type).
    Take the time to understand the behavior you observe, or run sensitivities to determine its root cause; know how things are modeled and "why", then research the control theory, electromagnetic response, or physical machine properties. Ask questions of those who can help shed light on what's being observed. Keeping our focus on the "why" will continue to make for more informed decisions and incredible engineers. #PowerSystems #PowerElectronics #ControlSystems #Modeling #SystemStudies #RenewableEnergy
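One concrete way to practice "run sensitivities to determine root cause" is a one-at-a-time parameter sweep. The sketch below is purely illustrative: the model is a toy damping-ratio formula for a series RLC circuit standing in for any simulation, and the ±10% step and component values are arbitrary choices.

```python
import math

# Illustrative one-at-a-time sensitivity sweep. The "model" is a toy
# stand-in for a simulation run: damping ratio of a series RLC circuit,
# zeta = (R/2) * sqrt(C/L). All values are arbitrary.

def model(params):
    return (params["R"] / 2.0) * math.sqrt(params["C"] / params["L"])

def sensitivity(model, base, rel_step=0.10):
    """Relative output swing when each input is perturbed +/- rel_step."""
    base_out = model(base)
    swings = {}
    for name in base:
        hi = dict(base); hi[name] = base[name] * (1 + rel_step)
        lo = dict(base); lo[name] = base[name] * (1 - rel_step)
        swings[name] = abs(model(hi) - model(lo)) / base_out
    return swings

base = {"R": 10.0, "L": 1e-3, "C": 1e-6}
swings = sensitivity(model, base)
# R enters linearly, so its swing (0.20) is about twice that of L or C,
# which enter under a square root (about 0.10 each). The sweep itself
# tells you which assumption drives the answer, i.e. the "why".
```

The same loop wraps any batch-mode study; the point is that the ranking of swings, not the raw output, is what identifies root cause.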

  • Krish Sengottaiyan

    Senior Advanced Manufacturing Engineering Leader | Pilot-to-Production Ramp | Industrial Engineering | Large-Scale Program Execution | Thought Leader & Mentor

    29,598 followers

    What if your factory only works because reality hasn't tested it yet? Most plants look stable, until demand shifts, a resource slips, or variability shows up where no one expected it. That's when leaders realize the system wasn't designed for reality. It was designed for assumptions. This is why simulation-based decision making, especially Discrete Event Simulation (DES), has become essential for smart plants. Not to predict the future, but to stress-test the system before the system is forced to respond. Here's what DES actually validates, end to end:
    1️⃣ Process Flow Optimization: DES shows how material and information truly move, not how the routing sheet claims they do.
    2️⃣ Equipment Utilization Analysis: High utilization can hide starvation and blocking. DES exposes when assets look busy but flow is unhealthy.
    3️⃣ Bottleneck Identification: Constraints aren't static. DES reveals where the bottleneck migrates under different conditions.
    4️⃣ Production Capacity Planning: Capacity isn't a fixed number. DES models how throughput behaves under variability, downtime, and mix changes.
    5️⃣ Buffer Sizing: Too much buffer masks instability. Too little amplifies it. DES finds the point where flow stays resilient.
    6️⃣ Cycle Time Distribution: Averages lie. DES reveals the spread, and where volatility is introduced.
    7️⃣ Resource Allocation: People, machines, and automation interact as a system. DES tests the balance before locking it in.
    8️⃣ Demand Flow Optimization: DES connects demand patterns to execution reality without overloading the system.
    9️⃣ Trial Build Scenario Analysis: Instead of learning after launch, DES lets teams explore "what if" scenarios before they become problems.
    🔟 Data-Driven Investment Decisions: Every capex decision is validated against system behavior, not isolated ROI logic.
    This is the real shift leaders are making:
    From trial builds → to validated scenarios
    From opinions → to evidence
    From firefighting → to designed stability
    Simulation doesn't improve factories. It reveals whether the system was ever ready. If you're scaling production, introducing automation, or rebalancing capacity, the question isn't "can the line run?"
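The utilization, bottleneck, and buffer effects described above can be reproduced with a surprisingly small model. Below is a hedged sketch in plain Python (no DES package): a two-station serial line simulated with the standard departure-time recursion, where a finite buffer creates blocking upstream and starvation downstream. All cycle times and parameters are illustrative.

```python
import random

# Two-station serial line with a finite buffer, via the departure-time
# recursion. Station 1 always has raw material; it is blocked when the
# buffer is full, and station 2 starves when the buffer runs empty.

def simulate_line(n_jobs, mean1, mean2, buffer_size, seed=0):
    rng = random.Random(seed)
    d1 = [0.0] * n_jobs    # time job i leaves station 1
    d2 = [0.0] * n_jobs    # time job i leaves station 2
    busy1 = busy2 = 0.0
    for i in range(n_jobs):
        s1 = rng.expovariate(1.0 / mean1)
        s2 = rng.expovariate(1.0 / mean2)
        finish1 = (d1[i - 1] if i else 0.0) + s1
        # Blocking: job i can leave station 1 only once a downstream slot
        # frees up, i.e. once job i - buffer_size - 1 has left station 2.
        k = i - buffer_size - 1
        d1[i] = max(finish1, d2[k]) if k >= 0 else finish1
        d2[i] = max(d2[i - 1] if i else 0.0, d1[i]) + s2
        busy1 += s1
        busy2 += s2
    makespan = d2[-1]
    return {"throughput": n_jobs / makespan,
            "util1": busy1 / makespan,
            "util2": busy2 / makespan}

# Both stations average 1.0 time unit per job, so the routing-sheet
# capacity is 1.0 jobs/unit. Variability plus a tight buffer pulls
# realized throughput well below that; a bigger buffer claws some back.
tight = simulate_line(20_000, mean1=1.0, mean2=1.0, buffer_size=2, seed=7)
roomy = simulate_line(20_000, mean1=1.0, mean2=1.0, buffer_size=50, seed=7)
```

With identical machines and demand, the only difference between the two runs is buffer space, which is exactly the kind of "capacity isn't a fixed number" evidence the post describes.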

  • Semion Gengrinovich

    Director, Reliability Engineering & Field Analytics

    6,473 followers

    Most engineering and business forecasts still rely on single-number estimates: one MTBF, one warranty-return rate, one “expected” portfolio return. Monte Carlo simulation flips that mindset by treating every key input as a distribution instead of a constant, then running thousands of virtual futures to see the full range of possible outcomes. Instead of asking “what will happen,” you start asking “what is the probability that we hit our reliability target or our financial goal under realistic variability and uncertainty.” For reliability engineers and decision makers, this becomes a virtual test lab and a virtual market at the same time. You can combine ALT or run-to-failure data, usage variability, and stress profiles to project field failures, while also modeling revenue, cost, or portfolio risk using the same framework. The result is a more honest conversation with stakeholders, framed in probabilities and risk envelopes instead of optimistic point estimates.
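The mindset above can be sketched directly: treat the fitted Weibull scale and shape as distributions rather than constants, run many virtual futures, and report the probability of meeting a survival target. Every number below (parameter means and spreads, the 5-year/80% target, the normal-uncertainty assumption) is invented for illustration, not real ALT or field data.

```python
import math
import random

# Monte Carlo over uncertain reliability inputs: the Weibull scale (eta)
# and shape (beta) are sampled around their fitted values instead of
# being treated as constants. All values here are illustrative.

def weibull_survival(t, eta, beta):
    """R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def mc_reliability(n_runs, target_t, target_R, seed=0):
    """Fraction of simulated futures with R(target_t) >= target_R."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        eta = rng.gauss(12.0, 1.5)    # scale in years, uncertain fit
        beta = rng.gauss(1.8, 0.2)    # shape, uncertain fit
        if eta <= 0 or beta <= 0:     # nonphysical draw counts as a miss
            continue
        if weibull_survival(target_t, eta, beta) >= target_R:
            hits += 1
    return hits / n_runs

# Probability of meeting "80% of units survive 5 years" under input
# uncertainty: a risk envelope, not a single optimistic point estimate.
p = mc_reliability(100_000, target_t=5.0, target_R=0.80)
```

Reporting `p` to stakeholders reframes the conversation from "the MTBF is X" to "we have roughly a p chance of hitting the target given what we actually know".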

  • Brent Roberts

    VP Growth Strategy, Siemens Software | Industrial AI & Digital Twins | Empowering industrial leaders to accelerate innovation, slash downtime & optimize supply chains.

    8,449 followers

    If you run service and maintenance, you're managing a moving system, not a checklist. The energy transition multiplies this complexity: assets interact across electricity, heat, fuels, storage, and conversion. That means troubleshooting can't stop at the asset level. It has to read the system.

    Here's what's working: bring design models and operational data into one living view. The material highlights this shift clearly with the digital twin and executable digital twin. Simulation models built during design are extended into operations, learning from sensor inputs to predict issues before they become outages. In practice, that looks like predicting turbine blade stress with only a few physical sensors, or using hybrid multiphase CFD to qualify equipment performance before deployment so field testing isn't the first test.

    This approach addresses the energy trilemma with day-to-day control. Affordability and access through higher efficiency and fewer truck rolls. Security through better visibility across critical parameters and faster root-cause analysis. Sustainability through tuned combustion, smarter storage, and cleaner fuel blends. It's not new tech for tech's sake. It's a single source of truth that lets teams see cause and effect across engineering, production, and service.

    One takeaway you can apply now: standardize a closed-loop workflow between engineering and ops. Reuse design models, connect real-time sensor data, and track changes in one place. If maintenance finds a recurring issue, feed it back into the model, simulate fixes, then roll the approved settings to the field. Over time, the system gets easier to run, not harder.

    If you're balancing safety, cost, and sustainability targets, and want system performance you can trust, let's compare notes on how you're closing the loop between design and operations.
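The closed-loop workflow described in the takeaway can be illustrated with a toy example: field measurements repeatedly update a model parameter the design phase assumed to be zero, and the recalibrated model is then used to simulate a candidate fix before it ships. The efficiency model, the fouling parameter, and the smoothing gain are all hypothetical; no vendor product or API is implied.

```python
# Toy closed-loop calibration: a design-time model parameter (a fouling
# coefficient in a made-up efficiency model) is nudged toward what field
# sensors imply, then the updated model evaluates a fix virtually.

def predicted_efficiency(load, fouling):
    # Toy physics stand-in: efficiency drops with load and fouling.
    return 0.92 - 0.05 * load - fouling

def update_fouling(fouling, load, measured_eff, gain=0.3):
    """Exponential-smoothing update toward the sensor-implied value."""
    implied = 0.92 - 0.05 * load - measured_eff
    return fouling + gain * (implied - fouling)

fouling = 0.00            # design assumption: clean machine
true_fouling = 0.04       # what the field data actually reflects
for day in range(20):     # daily sensor readings at 60% load
    measured = predicted_efficiency(0.60, true_fouling)
    fouling = update_fouling(fouling, 0.60, measured)

# The estimate has converged toward the field value. Now simulate a fix
# (say, a cleaning cycle that removes 75% of fouling) before rolling it out.
eff_after_fix = predicted_efficiency(0.60, fouling * 0.25)
```

Real executable digital twins replace the smoothing step with proper state estimation, but the loop shape is the same: measure, recalibrate, simulate the fix, then dispatch.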

  • Rajat Walia

    Senior Aerodynamics Engineer @ Mercedes-Benz | CFD | Thermal | Aero-Thermal | Computational Fluid Dynamics | Valeo | Formula Student

    118,194 followers

    AI/ML for Engineers – Learning Pathway, Part 2 (Datasets, Code, Projects & Libraries for CAE & Simulation)
    If you're a mechanical or aerospace engineer diving into ML, you've probably realized this: there's no shortage of ML tutorials, but very few are tailored to simulation, CFD, or physics-based modeling. This second part of Justin Hodges, PhD's blog fills that gap. In the blog, you will find:
    ➡️ Which datasets actually matter in CAE applications.
    ➡️ Beginner-friendly vs. advanced datasets for meaningful projects.
    Links to real engineering data like:
    ➡️ AhmedML, WindsorML, DrivaerML (31 TB of aero simulation data)
    ➡️ NASA Turbulence Modeling Challenge Cases (with goals for ML-based prediction)
    ➡️ Johns Hopkins Turbulence Databases
    ➡️ Stanford CTR DNS datasets, MegaFlow2D, Vreman Research, and more
    He also points to coding libraries, open-source projects, and suggestions for portfolio-building, especially helpful if you're not publishing papers or attending conferences. Read the full blog here: https://lnkd.in/ggT72HiC
    Image source: a Python learning roadmap for CAE applications, suggested by Maksym Kalaidov 🇺🇦. He is a great expert to follow in the space of ML surrogates for engineering simulation. #mechanical #aerospace #automotive #cfd #machinelearning #datascience #ai #ml

  • Ahmadreza Mohammad Sharifi

    Interested in Finite Element Analysis (FEA) with Abaqus CAE, Fortran Subroutine, and Python Script.

    12,963 followers

    Why Might Finite Element Simulation Results in Composite Materials Differ from Experimental Tests? 🤔
    It's common to encounter discrepancies when comparing finite element simulations with experimental results for composite materials. Here are some key considerations and tips to help you reduce errors and achieve a closer match! 🔍✨
    🧩 Understand Material Behavior Inputs: The accuracy of your simulation relies heavily on the material behavior data you input. To avoid discrepancies, ensure your input data reflects the actual experimental conditions and material properties.
    📄 Avoid Relying on Published Data Alone: Values found in articles might not match your exact material and experimental conditions. If possible, obtain material properties specific to your test setup rather than relying solely on literature values.
    🧪 Align Experimental Conditions: Differences in curing processes, volume fractions, or manufacturing techniques can create variations in material behavior. Try to align these factors between your simulation and experimental tests to ensure comparable results.
    🌡️ Factor in Environmental Conditions: External conditions like temperature and humidity can affect the material response. Ensure your simulation environment mirrors the actual testing environment for better correlation.
    🛠️ Calibrate & Validate: Start by simulating simpler tests to validate your material model. Use these tests to calibrate parameters, making sure the simulated results closely match the experimental data before moving on to complex simulations.
    🚨 Other Common FEM Mistakes to Watch Out For: mesh sensitivity analysis, boundary conditions, convergence and iterations, contact & interaction.
    #FEMTips #CompositeSimulation #FiniteElementAnalysis #SimulationAccuracy #MeshSensitivity #BoundaryConditions #Engineering #Composite #FEM #Abaqus
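The mesh sensitivity check mentioned at the end can be made concrete with Richardson extrapolation: run the same case on three meshes refined by a constant ratio, estimate the observed order of convergence, and extrapolate the mesh-independent value. The stress numbers below are made up purely to show the arithmetic.

```python
import math

# Richardson extrapolation for a mesh sensitivity study. f_fine, f_mid,
# f_coarse are the same output (e.g. peak stress) on three successively
# refined meshes with a constant refinement ratio r. Values are invented.

def richardson(f_fine, f_mid, f_coarse, r):
    """Observed order of convergence p and extrapolated 'exact' value."""
    p = math.log((f_coarse - f_mid) / (f_mid - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_mid) / (r ** p - 1.0)
    return p, f_exact

# Peak stress (MPa) on coarse -> fine meshes, refinement ratio 2
f_coarse, f_mid, f_fine = 412.0, 436.0, 442.0
p, f_exact = richardson(f_fine, f_mid, f_coarse, r=2.0)
# (f_coarse - f_mid) / (f_mid - f_fine) = 24 / 6 = 4, so p = 2
# (second-order convergence) and f_exact = 442 + 6/3 = 444 MPa.
# The fine mesh sits within ~0.5% of the extrapolated value, so further
# refinement buys little; a larger gap would mean the mesh still matters.
```

Only when the mesh error is quantified this way can remaining discrepancy with experiment be attributed to material inputs or boundary conditions rather than discretization.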

  • Santi Adavani

    AI Systems for the Physical World

    6,123 followers

    🔬 Engineering design synthesis is moving from manual iteration to automated, data-driven approaches. MIT researchers map out how deep generative models are enabling this shift, with important technical implications for how we develop products, in the reference paper below. The paper provides a systematic analysis across:
    🎯 Design Problems:
    • Topology optimization
    • Materials & microstructure design
    • 2D/3D shape synthesis
    • Multi-component product design
    💾 Data Representations:
    • Voxels & point clouds for 3D
    • Images for 2D designs
    • Parametric specs for manufacturing
    • Graphs for component relationships
    🧮 Model Architectures:
    • GANs with various conditioning approaches
    • VAEs for latent space exploration
    • RL for sequential design decisions
    • Integration with physics-based simulation
    ⚖️ Loss Functions:
    • Performance metrics from simulation
    • Manufacturability constraints
    • Style transfer for design aesthetics
    • Multi-objective optimization
    📊 Key Datasets:
    • UIUC airfoil database
    • ShapeNet/ModelNet for 3D shapes
    • BIKED bicycle design dataset
    • Material microstructure collections
    📝 Reference: "Deep Generative Models in Engineering Design: A Review" by Regenwetter et al. https://lnkd.in/g_mMR-8y
    S2 Labs #EngineeringDesign #MachineLearning #TechnicalResearch

  • Adam DeJans Jr.

    Decision Intelligence | Author | Executive Advisor

    25,020 followers

    In simulation optimization, especially within complex systems, estimating the performance of a design under uncertainty is critical. This is where discrete event simulation comes into play. For a given design, we simulate different scenarios, capturing various outcomes to understand how well the design performs. The average result of these simulations gives us a solid estimate of the design's effectiveness.
    Practical Advice:
    ✅ Start with a Clear Objective: Define what you want to optimize, be it cost, efficiency, or customer satisfaction. Knowing your end goal helps tailor the simulation scenarios effectively.
    ✅ Run Enough Simulations: To capture the variability of real-world conditions, run a sufficient number of replications. More runs give a more accurate estimate of quality, but balance that against computational resources.
    ✅ Analyze and Iterate: Use the results to identify areas for improvement. If certain designs perform poorly under specific conditions, use that insight to refine and test again. Continuous iteration helps hone the optimal solution.
    ✅ Leverage Software Tools: Use simulation tools that can handle complex event-driven processes. This saves time and provides more detailed insights.
    By following these steps, you can make your optimization efforts robust and reliable. Simulation-based optimization is more than just a tool; it's a way to drive better, data-informed decisions in uncertain environments. I've attached a handwritten version of the formula! (On a sticky note, to honor Dr. Kruti Lehenbauer 🙇♂️) What simulation tools have you used? What do you see as the tradeoffs between them?
    #SimulationOptimization #DiscreteEventSimulation #DecisionMaking
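The "run enough simulations" advice can be operationalized with a stopping rule on the confidence interval of the replication mean. Below is a minimal sketch: the cost model is a toy stand-in for a real discrete event simulation, and the 95% z-value and thresholds are conventional choices, not prescriptions.

```python
import math
import random

# Replicate a stochastic model, track the sample mean and a ~95%
# confidence half-width, and stop once the estimate is tight enough.

def one_replication(rng):
    cost = 100.0                            # base operating cost
    if rng.random() < 0.2:                  # occasional disruption...
        cost += rng.expovariate(1 / 50.0)   # ...with a heavy, variable cost
    return cost

def estimate(target_halfwidth, max_runs=100_000, seed=3):
    rng = random.Random(seed)
    results = []
    while len(results) < max_runs:
        results.append(one_replication(rng))
        n = len(results)
        # Check the interval in batches of 100 replications; never trust
        # a confidence interval built from just a handful of runs.
        if n >= 100 and n % 100 == 0:
            mean = sum(results) / n
            var = sum((x - mean) ** 2 for x in results) / (n - 1)
            halfwidth = 1.96 * math.sqrt(var / n)
            if halfwidth <= target_halfwidth:
                return mean, halfwidth, n
    raise RuntimeError("did not converge within max_runs")

mean, hw, n = estimate(target_halfwidth=1.0)
# n shows how many replications that confidence actually required,
# usually far more than a handful of demo runs.
```

The same pattern balances accuracy against compute: tighten `target_halfwidth` for decisions with real money attached, loosen it for early screening of designs.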

  • Ayman ElFouly

    Senior Engineering Consultant at Applied Science International, LLC - ASI

    10,710 followers

    Bridging Digital Twins with Extreme Loading for Structures (ELS) for Advanced Structural Analysis 🌍🏗️
    In today's rapidly evolving engineering landscape, Digital Twins are transforming how we analyze, monitor, and predict structural behavior. When paired with Extreme Loading for Structures (ELS) software, we unlock a powerful synergy that enables engineers to simulate real-world structural responses with unprecedented accuracy.
    🔹 Why Integrate Digital Twins with ELS?
    ✅ Real-Time Structural Assessment – Digital Twins provide continuous updates on structural conditions, while ELS simulates extreme scenarios like blast loads, progressive collapse, and seismic events.
    ✅ Enhanced Predictive Maintenance – Combining real-world data with nonlinear structural analysis allows engineers to predict failures before they occur, optimizing maintenance and reducing costs.
    ✅ Better Decision-Making – Engineers, insurers, and risk managers can visualize potential damage in complex structures and infrastructure, improving safety and resilience.
    ✅ Cost-Effective Design Optimization – ELS helps refine structural designs by testing "what-if" scenarios in a virtual environment, ensuring performance under extreme conditions.
    By merging Digital Twin technology with ELS, we step into the future of structural engineering, where we don't just react to failures but predict and prevent them. 🚀
    How do you see Digital Twins shaping the future of structural analysis? Let's discuss! ⬇️
    #DigitalTwin #ExtremeLoading #StructuralEngineering #Resilience #Simulation #SmartInfrastructure
