Bayesian Analysis in Engineering

Summary

Bayesian analysis in engineering is a statistical approach that uses probability models to update predictions and parameter estimates as new data becomes available, making it ideal for handling uncertainty and incomplete information. This method helps engineers make informed decisions about material properties, risk management, and system behaviors by combining prior expert knowledge with experimental results.

  • Embrace uncertainty: Use Bayesian methods to quantify and update uncertainty in engineering parameters, especially when data is limited or conditions are changing.
  • Combine expert insights: Integrate expert judgment and relevant precedents with new evidence to improve predictions and risk assessments in complex engineering scenarios.
  • Advance material modeling: Apply Bayesian analysis and neural networks to predict material responses and durability, allowing for more reliable design and maintenance planning.
  • Marcelo Llano

    Principal Geotechnical Engineer

    If you work with the NorSand constitutive model, you know that calibrating elasticity and hardening parameters via visual fitting can be a tedious and subjective task. In our new paper, we propose a better way: a Bayesian approach that integrates experimental triaxial data with predefined priors to objectively estimate the most likely parameter values. This method removes the bias of manual fitting and quantifies the uncertainty in your parameters. We validated the methodology using Fraser River sand and have released the Python code so you can try it on your own datasets.

    🔗 Paper: https://lnkd.in/eub3ShpC
    🔗 GitHub Repository: https://lnkd.in/edshx9v9

    #Geotechnics #ConstitutiveModeling #DataScience #Engineering #Mining

    Thanks to all the co-authors: Luis-Fernando, Humberto and Alexandra. SRK Consulting, Geosyntec Consultants, Red Earth Engineering A Geosyntec Company, The University of Western Australia
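The paper's released code implements the full calibration; as a stand-alone illustration of the core idea (combining a predefined prior with triaxial-style data to obtain a posterior over a model parameter), here is a minimal random-walk Metropolis sketch in pure Python. The one-parameter linear model, prior, and noise levels are hypothetical stand-ins, not the NorSand formulation:

```python
import math
import random

random.seed(0)

# Synthetic stand-in for triaxial measurements: a response that depends
# linearly on one hypothetical stiffness-like parameter k (true value 2.5).
true_k = 2.5
xs = [0.1 * i for i in range(1, 21)]
ys = [true_k * x + random.gauss(0.0, 0.05) for x in xs]

def log_prior(k):
    # Predefined prior on k: Normal(mean=2.0, sd=1.0), standing in for the
    # expert-informed priors the paper integrates with the data.
    return -0.5 * ((k - 2.0) / 1.0) ** 2

def log_likelihood(k):
    # Gaussian measurement noise with sd=0.05 (an assumed value).
    return sum(-0.5 * ((y - k * x) / 0.05) ** 2 for x, y in zip(xs, ys))

def log_posterior(k):
    return log_prior(k) + log_likelihood(k)

# Random-walk Metropolis: accept a proposal with probability min(1, p'/p).
samples, k = [], 2.0
logp = log_posterior(k)
for _ in range(20000):
    prop = k + random.gauss(0.0, 0.05)
    lp = log_posterior(prop)
    if math.log(random.random()) < lp - logp:
        k, logp = prop, lp
    samples.append(k)

burn = samples[5000:]                         # discard burn-in
mean_k = sum(burn) / len(burn)
sd_k = (sum((s - mean_k) ** 2 for s in burn) / len(burn)) ** 0.5
print(f"posterior mean={mean_k:.3f}, sd={sd_k:.3f}")
```

The posterior mean recovers the data-generating value while the posterior standard deviation quantifies the calibration uncertainty, which is exactly what visual fitting cannot give you.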

  • A recurring challenge across science & engineering: you need to align a computationally expensive black-box simulator (PDEs, etc.) to data in order to infer hidden parameters like material coefficients or boundary conditions. In many such cases, you don't have access to gradients, adjoints, etc.

    If you only want point estimates, then Bayesian optimisation (BO) is an option. But if you care about the full posterior distribution, Monte Carlo or MCMC quickly become infeasible. You could fall back on Laplace approximations, but for most PDE-based inverse problems the posteriors are horrible: multimodal, non-identifiable, with tangled geometries reflecting sensitivity scales and invariances. ABC is another option, but it typically requires huge numbers of evaluations and has a tendency to inflate posteriors.

    So the homework question was: just as BO uses Gaussian process surrogates and acquisition strategies to explore costly functions, can we design sampling strategies the same way, to approximate a posterior under a fixed compute budget?

    With the brilliant Takuo Matsubara, Simon Cotter, and Konstantinos Zygalakis, we introduce Bandit Importance Sampling (BIS):
    • A new class of importance sampling that designs samples directly via multi-armed bandits.
    • Combines space-filling sequences (Halton, QMC) with GP surrogates to adaptively focus evaluations where they matter most.
    • Comes with theoretical guarantees and works well on multimodal, heavy-tailed, and real-world Bayesian inference problems.

    Takeaway: BIS works well: it can cut evaluations by orders of magnitude. For problems with ~10–20 parameters, it's a very viable option.

    Preprint here: https://lnkd.in/egrZX_NJ

    Next steps: packaging this up for the community.
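BIS itself combines bandits, QMC sequences, and GP surrogates (see the preprint); the baseline it improves upon, plain self-normalized importance sampling, can be sketched in a few lines. The bimodal target and Gaussian proposal below are illustrative choices, not from the paper:

```python
import math
import random

random.seed(1)

def log_target(x):
    # A bimodal, unnormalized "posterior": equal mixture of Normals at -2, +2.
    def lognorm(m):
        return -0.5 * (x - m) ** 2
    a, b = lognorm(-2.0), lognorm(2.0)
    top = max(a, b)
    return top + math.log(math.exp(a - top) + math.exp(b - top))

# Wide Gaussian proposal q = Normal(0, 3) that covers both modes.
def sample_q():
    return random.gauss(0.0, 3.0)

def log_q(x):
    # Normalizing constants cancel in self-normalized IS, so they are dropped.
    return -0.5 * (x / 3.0) ** 2 - math.log(3.0)

n = 20000
xs = [sample_q() for _ in range(n)]
logw = [log_target(x) - log_q(x) for x in xs]
shift = max(logw)                               # stabilize the exponentials
w = [math.exp(lw - shift) for lw in logw]
tot = sum(w)

post_mean = sum(wi * xi for wi, xi in zip(w, xs)) / tot
post_var = sum(wi * xi * xi for wi, xi in zip(w, xs)) / tot - post_mean ** 2
ess = tot ** 2 / sum(wi * wi for wi in w)       # effective sample size
print(f"mean={post_mean:.3f}, var={post_var:.3f}, ESS={ess:.0f}")
```

The effective sample size shows how much of the budget the proposal wastes; BIS's contribution, per the post, is to choose evaluation points adaptively instead of drawing them blindly from a fixed proposal.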

  • Stefan Hunziker, PhD

    Professor of Risk Management | Prof. Dr. habil.

    No Two Ways About It: Why Bayesian Thinking Is Non-Negotiable in Risk Management

    There are two ways to quantify risk. Frequentist methods derive probability from past frequencies; they work well when data are plentiful and conditions are stable, but they can be misleading when data are limited or the environment shifts. Bayesian methods view probability as your current, evidence-based belief, starting with a reasonable baseline from base rates and expert judgment, then updating it as new data arrives.

    How many risks in your portfolio have enough relevant data to quantify easily? Plenty for financial exposures, but far fewer for operational risks, and almost none for strategic risks. When data is limited, “data-only” estimates can be dangerously misleading, and the most critical risks may go unassessed. That’s why risk management must rely on Bayesian principles.

    Consider a first-of-its-kind risk, such as a regulatory change that could force your company to shut down. You face a choice: complain that this risk can’t be assessed due to missing data, or use Bayesian methods, which can start with expert judgment and relevant precedents from comparable jurisdictions. Suppose you estimate a 5-10% probability within 12 months and a P90 loss of $280-380 million; formally, that 5-10% baseline is your prior. Then a new public signal on “data sovereignty” arrives: your belief updates to a posterior of 20-30%, which you report as a range, and the P90 increases, breaching your risk appetite. Once lawmakers pause the proposal, the probability falls to 10-15%. There is no frequentist approach here; only Bayesian estimation that updates with each new clue can reduce uncertainty.

    The same applies to strategic risks. Suppose you initially have three options for entering a market: 1. acquiring a target, 2. forming a joint venture, or 3. building from scratch. Your initial estimate suggests a 20-30% probability you won’t reach break-even within 24 months, with a P90 loss of CHF 9-12 million. Early assessments rule out the acquisition. With one option eliminated, the estimates for the remaining two options worsen: the failure risk rises to 30-40%, and the P90 loss to CHF 12-16 million, exceeding your risk appetite (set at CHF 12 million). You don’t need more data: you can run a two-region joint venture pilot, add an exit clause, and invest CHF 1.2 million in targeted branding. A quarter later, the pilot performs well; the failure risk drops to 20-26%, and the P90 loss to CHF 11-12 million, bringing it back within your risk appetite.

    Most critical risks are inherently Bayesian. If your risk report omits quantified risks due to insufficient data, replace them with “living probabilities” that learn from updated data. Without Bayesian thinking, risk management won’t be effective, and your risk portfolio will likely be incomplete.

    Institut für Finanzdienstleistungen Zug IFZ, Lucerne University of Applied Sciences and Arts
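The prior-to-posterior updates described in the regulatory example can be made concrete with Bayes' rule in odds form. The likelihood ratios below are hypothetical stand-ins for how strongly each signal speaks for or against the risk; the post does not specify them:

```python
def bayes_update(prior_p, likelihood_ratio):
    """Update a probability via Bayes' rule in odds form:
    posterior odds = likelihood ratio * prior odds."""
    prior_odds = prior_p / (1.0 - prior_p)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1.0 + post_odds)

# Prior range from expert judgment: 5-10% probability within 12 months.
prior_lo, prior_hi = 0.05, 0.10

# Hypothetical likelihood ratio for the "data sovereignty" signal: the
# signal is assumed ~4x more likely if the shutdown risk is real.
lr_signal = 4.0
post_lo = bayes_update(prior_lo, lr_signal)
post_hi = bayes_update(prior_hi, lr_signal)
print(f"after signal: {post_lo:.0%}-{post_hi:.0%}")   # roughly 17%-31%

# Lawmakers pause the proposal: evidence against, so LR < 1 (assumed 0.4).
down_lo = bayes_update(post_lo, 0.4)
down_hi = bayes_update(post_hi, 0.4)
print(f"after pause:  {down_lo:.0%}-{down_hi:.0%}")   # roughly 8%-15%
```

Reporting the low and high ends of the prior separately is one simple way to carry the "range" through each update, matching how the post quotes posteriors as intervals.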

  • Nima Khodadadi

    University of California, Berkeley

    I am delighted to share that our latest research has been published in the ASCE Journal of Materials in Civil Engineering!

    Title: Probabilistic Assessment of Chloride Ion Migration in Concrete: Enhancing Predictive Accuracy and Gaining Insights into Distribution Patterns.

    This study addresses chloride ion ingress, one of the primary causes of reinforced concrete deterioration and reduced service life. We developed a Bayesian-adjusted probabilistic model that more accurately predicts chloride distribution under different conditions of temperature, humidity, and water-cement ratio. Experimental validation confirmed that the model significantly improves predictive accuracy, with all test results falling within the predicted confidence intervals. The findings provide a stronger scientific basis for durability assessment and offer valuable guidance for extending the service life of reinforced concrete structures.

    I am grateful to the entire team for their contributions and collaboration, and I extend my special thanks to Professor Antonio Nanni for his unparalleled support and guidance throughout this work. Stay tuned... there will be more.

    Link: https://lnkd.in/ePinaUZ8

    O my father (Persian: یا پدر من)
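The paper's own model is behind the link above; as background, the standard starting point for chloride ingress, the error-function solution of Fick's second law with an uncertain apparent diffusion coefficient, shows how a probabilistic prediction interval of the kind validated in the paper arises. All numeric values below are illustrative assumptions, not the paper's calibrated parameters:

```python
import math
import random

random.seed(2)

def chloride(x_m, t_s, d_m2s, c_s=0.5):
    """Error-function solution of Fick's second law for chloride ingress:
    C(x, t) = C_s * erfc(x / (2 * sqrt(D * t))), with C_s the surface
    concentration (% by mass of binder, assumed 0.5 here)."""
    return c_s * math.erfc(x_m / (2.0 * math.sqrt(d_m2s * t_s)))

# Uncertain apparent diffusion coefficient: lognormal around 1e-12 m^2/s
# (an assumed magnitude; temperature, humidity, and w/c ratio would shift it).
def sample_d():
    return math.exp(random.gauss(math.log(1e-12), 0.3))

t = 10 * 365.25 * 24 * 3600.0   # 10 years, in seconds
x = 0.05                        # 50 mm cover depth, in metres

# Monte Carlo over the uncertain coefficient -> predictive interval at (x, t).
draws = sorted(chloride(x, t, sample_d()) for _ in range(10000))
lo, med, hi = draws[250], draws[5000], draws[9750]   # 95% interval + median
print(f"C(50mm, 10y): median={med:.4f}, 95% interval=[{lo:.4f}, {hi:.4f}]")
```

Checking whether measured concentrations fall inside such predicted intervals is the kind of validation the post reports, here with the uncertainty entering only through the diffusion coefficient for simplicity.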

  • Michael Shields

    Professor at The Johns Hopkins University; President and Principal Engineer at UQuant, Inc.

    I'm happy to share our latest paper published in Computer Methods in Applied Mechanics and Engineering, entitled "Bayesian neural networks for predicting uncertainty in full-field material response."

    Most uncertainty quantification methods predict uncertainty in low-dimensional quantities of interest derived from a more complicated model. In this work, we use Bayesian convolutional neural networks to predict uncertainty in the full-field mechanical response of heterogeneous materials under specified loading conditions. The neural network predicts the stress in the material at each discretized point in space, along with a measure of uncertainty in the prediction at each point.

    We compare three different strategies for Bayesian neural network implementation: Hamiltonian Monte Carlo (HMC), the variational Bayes by Backprop (BBB) algorithm, and Monte Carlo dropout. The results show that HMC can be computationally tractable, even for some large, high-dimensional problems, while variational BBB provides robust uncertainty estimates that are largely consistent with HMC. Monte Carlo dropout, meanwhile, shows inconsistencies with the other methods and is very sensitive to its design parameters.

    For more details, see the paper: https://lnkd.in/eh--Zh3r

    The credit for this work goes to our awesome postdoc, George Pasparakis, and my incredible collaborator Lori Graham-Brady. This work has been supported under the Center for High-Throughput Materials Discovery for Extreme, funded by the U.S. Army DEVCOM Army Research Laboratory. We are grateful for their support!

    #bayesian #neuralnetworks #computationalmechanics

    Johns Hopkins Whiting School of Engineering | Johns Hopkins Department of Civil and Systems Engineering | Hopkins Extreme Materials Institute at Johns Hopkins University
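Of the three strategies compared, Monte Carlo dropout is the simplest to sketch: keep dropout active at inference time and read predictive uncertainty off repeated stochastic forward passes. The tiny fixed-weight network below is purely illustrative, not the paper's convolutional architecture:

```python
import math
import random

random.seed(3)

# Toy "network": one hidden layer with fixed weights, ReLU activation, and
# dropout applied to the hidden units at inference time (the MC dropout trick).
W1 = [[0.5, -0.3], [0.8, 0.1], [-0.4, 0.9], [0.2, 0.7]]   # 4 hidden units, 2 inputs
W2 = [0.6, -0.2, 0.9, 0.4]
P_KEEP = 0.8

def forward(x, dropout=True):
    h = [max(0.0, w[0] * x[0] + w[1] * x[1]) for w in W1]   # ReLU layer
    if dropout:
        # Keep each unit with probability P_KEEP and rescale (inverted dropout),
        # so the stochastic output is unbiased for the deterministic one.
        h = [hi / P_KEEP if random.random() < P_KEEP else 0.0 for hi in h]
    return sum(wi * hi for wi, hi in zip(W2, h))

def mc_dropout_predict(x, n=2000):
    # Repeated stochastic passes: mean is the prediction, std the uncertainty.
    ys = [forward(x) for _ in range(n)]
    mean = sum(ys) / n
    std = math.sqrt(sum((y - mean) ** 2 for y in ys) / n)
    return mean, std

mean, std = mc_dropout_predict([1.0, 2.0])
det = forward([1.0, 2.0], dropout=False)
print(f"deterministic={det:.3f}, MC mean={mean:.3f} +/- {std:.3f}")
```

In the full-field setting this per-output spread would be computed at every discretized spatial point; the paper's finding that this spread is sensitive to design parameters (such as the dropout rate) is visible here too, since the uncertainty scales directly with P_KEEP.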
