Establishing how well a numerical simulation represents reality is one way to make simulation results more trustworthy for decision makers. This assessment is usually accomplished by comparing simulation results to physical test data. However, the observed mismatch between the simulation results and the physical test results can obscure an engineer's understanding of how well the simulation represents reality. This presentation will focus on statistical model calibration, a machine learning process that quantifies the uncertainties in a simulation model, providing both an understanding of this mismatch and a means to narrow the gap between simulation and physical test.
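To make the frequentist side of the idea concrete, the sketch below calibrates a single uncertain parameter of a toy model by minimizing the squared mismatch between simulation output and physical test data. The model form, parameter, and data are hypothetical and chosen purely for illustration; this is not SmartUQ's implementation.

```python
# Minimal sketch of frequentist (least-squares) model calibration.
# Everything here is a hypothetical example, not SmartUQ's method.

def simulate(x, theta):
    """Toy simulation model with one uncertain parameter theta."""
    return theta * x ** 2

# Hypothetical physical test data: generated from theta = 2.0 with
# small fixed offsets standing in for measurement noise.
xs = [0.5, 1.0, 1.5, 2.0, 2.5]
noise = [0.02, -0.03, 0.01, -0.02, 0.03]
ys = [2.0 * x ** 2 + e for x, e in zip(xs, noise)]

def sse(theta):
    """Sum of squared simulation-test residuals for a candidate theta."""
    return sum((simulate(x, theta) - y) ** 2 for x, y in zip(xs, ys))

# Calibrate theta by grid search over a plausible range (1.0 to 3.0).
candidates = [1.0 + 0.001 * i for i in range(2001)]
theta_hat = min(candidates, key=sse)
print(theta_hat)  # close to the data-generating value of 2.0
```

A Bayesian calibration would instead place a prior on theta and return a posterior distribution, quantifying the remaining parameter uncertainty rather than a single best-fit value.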
This free 90-minute tutorial will introduce the basics of statistical calibration. Through a series of example problems, the underlying ideas and benefits of statistical calibration will be illustrated. Detailed instruction and demonstration on the use of SmartUQ's frequentist and Bayesian calibration tools will be included. Attendees will learn when to use calibration, the settings and options available within SmartUQ, and how to interpret the results. Applications of statistical calibration to digital twins will also be discussed.
This tutorial is interactive; attendees are encouraged to ask questions throughout.