
Demonstrating Machine Learning for Narrowing the Simulation-Test Gap

On Demand

Establishing how well a numerical simulation represents reality is one way to make simulation results more trustworthy for decision makers. This assessment is usually accomplished by comparing the simulation results to physical test data. However, the observed mismatch between the simulation results and the physical test results can blur an engineer's understanding of how well the simulation represents reality. This presentation will focus on statistical model calibration, a machine learning process that quantifies the uncertainties in the simulation model, providing both an understanding of this mismatch and a means to narrow the simulation-test gap.
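
As a purely conceptual sketch of what calibration means in practice (a hypothetical toy example, not the tutorial's materials and not SmartUQ's API), the Python snippet below tunes a single simulation parameter so the simulated response best matches noisy physical test data, then summarizes the remaining simulation-test gap with the residual spread. The simulator, the parameter theta, and the data are all invented for illustration.

```python
# Toy illustration of frequentist calibration (hypothetical example, not SmartUQ's API).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def simulator(x, theta):
    """Cheap stand-in for a numerical simulation with one calibration parameter theta."""
    return theta * np.sin(x) + 0.5 * x

# Synthetic "physical test" data: the true parameter is 1.3, plus measurement noise.
x_test = np.linspace(0.0, 3.0, 20)
y_test = simulator(x_test, 1.3) + rng.normal(scale=0.1, size=x_test.size)

# Frequentist calibration: choose theta that minimizes the simulation-test mismatch.
def mismatch(theta):
    return np.sum((simulator(x_test, theta) - y_test) ** 2)

result = minimize_scalar(mismatch, bounds=(0.0, 3.0), method="bounded")
theta_hat = result.x

# The residual spread is one simple summary of the remaining simulation-test gap.
residuals = y_test - simulator(x_test, theta_hat)
print(f"calibrated theta = {theta_hat:.3f}, residual std = {residuals.std():.3f}")
```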

This free 90-minute tutorial will provide an introduction to the basics of statistical calibration. Through a series of example problems, the underpinning ideas and benefits of statistical calibration will be illustrated. Detailed instruction and demonstration of SmartUQ's frequentist and Bayesian calibration tools will be included. Attendees will learn when to use calibration, the different settings and options available within SmartUQ, and how to interpret the results. Applications of statistical calibration to digital twins will also be discussed.
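
To illustrate the frequentist-versus-Bayesian distinction in generic terms (again a hypothetical toy, not SmartUQ's tooling), the sketch below replaces the single best-fit parameter with a posterior distribution over the parameter, computed on a simple grid with a Gaussian likelihood and a uniform prior; the posterior mean and credible interval then express calibration uncertainty rather than a point estimate.

```python
# Generic Bayesian calibration sketch on a toy problem (hypothetical, not SmartUQ's tooling).
import numpy as np

rng = np.random.default_rng(1)

def simulator(x, theta):
    """Cheap stand-in for a simulation with one uncertain calibration parameter theta."""
    return theta * np.sin(x) + 0.5 * x

# Synthetic "physical test" data with known measurement noise sigma.
sigma = 0.1
x_test = np.linspace(0.0, 3.0, 20)
y_test = simulator(x_test, 1.3) + rng.normal(scale=sigma, size=x_test.size)

# Uniform prior over a plausible range of theta, evaluated on a grid.
theta_grid = np.linspace(0.0, 3.0, 2001)
dtheta = theta_grid[1] - theta_grid[0]

# Gaussian log-likelihood of the test data for each candidate theta (log scale for stability).
log_like = np.array([
    -0.5 * np.sum((y_test - simulator(x_test, t)) ** 2) / sigma**2
    for t in theta_grid
])
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum() * dtheta  # normalize to a density over theta

# Posterior summaries: mean and a 95% credible interval for the calibrated parameter.
cdf = np.cumsum(posterior) * dtheta
mean = np.sum(theta_grid * posterior) * dtheta
lower, upper = np.interp([0.025, 0.975], cdf, theta_grid)
print(f"posterior mean = {mean:.3f}, 95% credible interval = [{lower:.3f}, {upper:.3f}]")
```

A grid approximation like this is only practical for one or two parameters; production calibration workflows typically rely on sampling methods (e.g., MCMC) and surrogate models to keep the number of expensive simulation runs manageable.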

This tutorial is interactive, and attendees are encouraged to ask questions throughout.


Presented by Gavin Jones, Principal Application Engineer
Gavin Jones serves as a Principal Application Engineer at SmartUQ, where he is responsible for performing simulation and AI work for clients in the automotive, aerospace, defense, semiconductor, and other industries. He is a member of the SAE Chassis Committee as well as the AIAA Digital Engineering Integration Committee. Gavin is also a key contributor to SmartUQ's Digital Twin/Digital Thread initiative.