Advances in simulation software and high-performance computing have enabled new technologies such as the Digital Twin, which delivers a continuous stream of complex data from simulation models. Analyzing these data streams with basic Uncertainty Quantification (UQ) methods, however, does not always yield the answers engineers need. The complex data these technologies produce have spawned new analytic methods for extracting pertinent, meaningful information.
Complex data can take many forms: spatio-temporal or transient responses, high-dimensional inputs, large numbers of simulation runs, or a mixture of continuous and categorical inputs and outputs. A further complexity is how best to tune a simulation model against data from physical experiments. Traditional techniques, such as data matching, produce a single best-fit but cannot determine how close the simulation results are to reality or quantify the uncertainty in the simulation model.
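To make "complex data" concrete, here is a minimal Python sketch of one such simulation campaign. Everything in it is invented for illustration (the inputs, the toy response formula, the run counts); the point is the shape of the data: mixed continuous and categorical inputs, and a transient time history per run instead of a single scalar.

```python
import numpy as np

# A hypothetical campaign of simulation runs with mixed input types and a
# transient (time-series) output per run, rather than a single scalar.
rng = np.random.default_rng(1)
n_runs, n_steps = 4, 50
t = np.linspace(0.0, 1.0, n_steps)

pressure = rng.uniform(10.0, 20.0, n_runs)           # continuous input
material = rng.choice(["steel", "alloy"], n_runs)    # categorical input

# Toy transient response: each run yields a full time history. A real
# Digital Twin would stream these histories from the simulation model.
stiffness = np.where(material == "steel", 1.0, 0.6)
response = np.exp(-stiffness[:, None] * t) * pressure[:, None]

print(response.shape)   # (4, 50): n_runs x n_steps
for p, m, r in zip(pressure, material, response):
    print(f"pressure={p:5.1f}  material={m:<5s}  peak response={r.max():6.2f}")
```

Even this tiny example already has more structure than scalar-response UQ methods assume, which is why complex data calls for different analytic tools.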
The training will discuss the many challenges complex data pose and introduce advanced UQ methods that address them. Among these methods, the training will introduce statistical calibration, a process for quantifying the uncertainties in the simulation model and narrowing the gap between simulation and physical test. It will walk through each advanced UQ method via a series of case studies.
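As a loose illustration of the distinction between data matching and statistical calibration, the sketch below fits a toy one-parameter simulator to synthetic test data. Every specific here is an assumption made up for the example: the simulator, the noise level sigma, and the flat-prior grid update (a deliberately simple Bayesian scheme, not the full calibration framework covered in the training). The takeaway is the output: data matching returns one number, while calibration returns a distribution whose spread quantifies the remaining uncertainty.

```python
import numpy as np

# Synthetic "physical test" data. The simulator, the true parameter value,
# and the noise level are all invented for illustration.
def simulator(x, theta):
    return theta * np.sin(x) + 0.1 * x

rng = np.random.default_rng(0)
x_obs = np.linspace(0.0, 3.0, 12)
sigma = 0.05                                   # assumed measurement-noise std dev
y_obs = simulator(x_obs, 1.5) + rng.normal(0.0, sigma, size=x_obs.size)

# Candidate values of the calibration parameter theta.
thetas = np.linspace(0.5, 2.5, 401)
sse = np.array([np.sum((y_obs - simulator(x_obs, t)) ** 2) for t in thetas])

# Data matching: one best-fit value, with no statement about uncertainty.
theta_fit = thetas[np.argmin(sse)]

# Statistical calibration (toy version): a posterior distribution over theta,
# built from a Gaussian likelihood and a flat prior on the grid.
log_like = -0.5 * sse / sigma**2
post = np.exp(log_like - log_like.max())
post /= post.sum()                             # normalize over the uniform grid

mean = np.sum(thetas * post)
std = np.sqrt(np.sum((thetas - mean) ** 2 * post))
print(f"data matching:           theta = {theta_fit:.3f}")
print(f"statistical calibration: theta = {mean:.3f} +/- {std:.3f} (posterior std)")
```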
This training is intended for engineers, managers, and data scientists involved in or overseeing simulation design and analysis or experimental analysis. Attendees should leave with an understanding of how advanced UQ methods can improve their experimental design and testing and validate their simulation results, how to apply model calibration to their combined simulation and testing environments, and the fundamental value that statistical calibration brings.