Establishing how well a numerical simulation represents reality is critical for making simulation results more trustworthy for decision makers. In general, two sources of uncertainty contribute to the disagreement between simulation and physical results: parameter uncertainty and model form uncertainty. SmartUQ offers a number of machine learning tools for quantifying these uncertainties and narrowing the simulation-test gap. This webinar will introduce two such SmartUQ tools:
Data Matching, which addresses parameter uncertainty and is useful when physical data are limited and/or only response information is available for the physical system.
Bayesian Calibration, which addresses both parameter and model form uncertainty.
The machine learning model of the simulation produced by each tool can also be used directly for prediction, enabling rapid analyses that wouldn't be possible via direct evaluation of the simulation itself.
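To give a flavor of the underlying idea, the sketch below shows a generic Bayesian parameter calibration in Python. It is not SmartUQ's implementation: a cheap polynomial function stands in for the simulation surrogate, the observations are synthetic, and a grid approximation of the posterior replaces a full sampler. All names and values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "physical" observations of a response y at inputs x.
x_obs = np.linspace(0.0, 1.0, 8)
y_obs = 2.5 * x_obs**2 + rng.normal(scale=0.05, size=x_obs.size)

def surrogate(x, theta):
    """Cheap stand-in for a simulation surrogate; theta is the calibration parameter."""
    return theta * x**2

# Grid approximation of the posterior over theta with a uniform prior on [0, 5].
theta_grid = np.linspace(0.0, 5.0, 501)
sigma = 0.05  # assumed observation-noise standard deviation

log_lik = np.array([
    -0.5 * np.sum((y_obs - surrogate(x_obs, t))**2) / sigma**2
    for t in theta_grid
])
posterior = np.exp(log_lik - log_lik.max())
posterior /= posterior.sum()  # normalize over the grid

theta_mean = np.sum(theta_grid * posterior)
print(f"Posterior mean of theta: {theta_mean:.3f}")  # should land near the true value 2.5

Once calibrated, the surrogate can be evaluated at the posterior mean (or averaged over the posterior) to make fast predictions, which is the role the machine learning model of the simulation plays in the workflow described above.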