Gradient-enhanced kriging

From Wikipedia, the free encyclopedia

Revision as of 04:29, 8 November 2016

Gradient-Enhanced Kriging (GEK) is a surrogate modeling technique used in engineering. A surrogate model (alternatively known as a metamodel, response surface or emulator) is a prediction of the output of an expensive computer code, based on a small number of evaluations of that code.

Introduction

Example of one-dimensional data interpolated by Kriging and GEK. The black line indicates the test function, while the gray circles indicate 'samples' or evaluations of the test function. The blue line is the Kriging mean, and the shaded blue area illustrates the Kriging standard deviation. With GEK we can add the gradient information, illustrated in red, which increases the accuracy of the prediction.[1]

Predictor equations

In a Bayesian framework, we use Bayes' Theorem to predict the Kriging mean and variance conditional on the observations. In our case, the observations are the results of a number of computer simulations.

Kriging

Along the lines of [1][2], we are interested in the output <math>x</math> of our computer simulation, for which we assume the normal prior probability distribution:

:<math>x \sim \mathcal{N}\left(\mu, P\right)</math>,

with prior mean <math>\mu</math> and prior covariance matrix <math>P</math>. The observations <math>y</math> have the normal likelihood:

:<math>y \mid x \sim \mathcal{N}\left(Hx, R\right)</math>,

with <math>H</math> the observation matrix and <math>R</math> the observation error covariance matrix. After applying Bayes' Theorem we obtain a normally distributed posterior probability distribution, with Kriging mean:

:<math>\mu_{x \mid y} = \mu + K\left(y - H\mu\right)</math>,

and Kriging covariance:

:<math>P_{x \mid y} = \left(I - KH\right)P</math>,

where we have the gain matrix:

:<math>K = PH^\mathsf{T}\left(HPH^\mathsf{T} + R\right)^{-1}</math>.
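As a concrete illustration, this posterior update can be sketched in a few lines of NumPy. The prior, observation matrix and data below are made up for the example and do not come from the article:

```python
import numpy as np

# Made-up example: a prior over the output at three points, with
# noisy direct observations of the first two components.
mu = np.zeros(3)                       # prior mean
P = np.array([[1.0, 0.6, 0.2],         # prior covariance matrix
              [0.6, 1.0, 0.6],
              [0.2, 0.6, 1.0]])
H = np.array([[1.0, 0.0, 0.0],         # observation matrix: we observe
              [0.0, 1.0, 0.0]])        # only the first two components
R = 0.01 * np.eye(2)                   # observation error covariance
y = np.array([0.5, -0.3])              # observed values

# Gain matrix K = P H^T (H P H^T + R)^-1
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

mean_post = mu + K @ (y - H @ mu)      # Kriging mean
cov_post = (np.eye(3) - K @ H) @ P     # Kriging covariance
```

Because the observation noise is small, the posterior mean at the observed components stays close to the observed values, while the variance of the unobserved third component shrinks only as far as its prior correlation with the observations allows.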

In Kriging, the prior covariance matrix <math>P</math> is generated from a covariance function. One example of a covariance function is the Gaussian covariance:

:<math>P_{ij} = \sigma^2 \mathrm{exp}\left(-\frac{|x_j-x_i|^2}{2 \theta^2}\right)</math>,

where the hyperparameters <math>\sigma</math> and <math>\theta</math> are estimated from a Maximum Likelihood Estimate (MLE).[1]
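The Gaussian covariance and the MLE step can likewise be sketched in NumPy. The sample locations, function values and the crude grid search below are made-up illustrations; a practical implementation would use a proper numerical optimizer over both hyperparameters:

```python
import numpy as np

def gaussian_cov(x, sigma, theta):
    """Gaussian covariance: P_ij = sigma^2 exp(-|x_j - x_i|^2 / (2 theta^2))."""
    d = x[:, None] - x[None, :]
    return sigma**2 * np.exp(-d**2 / (2.0 * theta**2))

# Made-up 1-D samples standing in for evaluations of an expensive code.
x = np.array([0.0, 0.3, 0.7, 1.0])
y = np.sin(2.0 * np.pi * x)

def neg_log_likelihood(sigma, theta, jitter=1e-6):
    """Negative log marginal likelihood of the samples under the prior;
    minimizing this over (sigma, theta) gives the MLE of the hyperparameters."""
    P = gaussian_cov(x, sigma, theta) + jitter * np.eye(len(x))
    _, logdet = np.linalg.slogdet(P)
    return 0.5 * (logdet + y @ np.linalg.solve(P, y) + len(x) * np.log(2.0 * np.pi))

# Crude grid search over theta with sigma fixed, for illustration only.
thetas = np.linspace(0.05, 1.0, 50)
best_theta = thetas[np.argmin([neg_log_likelihood(1.0, t) for t in thetas])]
```

The small jitter term keeps the covariance matrix numerically positive definite, a common precaution when the Gaussian covariance is evaluated at closely spaced samples.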

GEK

Example: Drag coefficient of a transonic airfoil

Reference results for the drag coefficient of a transonic airfoil, based on a large number of CFD simulations. The horizontal and vertical axis show the deformation of the shape of the airfoil.
Kriging surrogate model of the drag coefficient of a transonic airfoil. The gray dots indicate the configurations for which the CFD solver was run.
GEK surrogate model of the drag coefficient of a transonic airfoil. The gray dots indicate the configurations for which the CFD solver was run; the arrows indicate the gradients.

References

  1. ^ a b de Baar, J.H.S.; Dwight, R.P.; Bijl, H. (2014). "Improvements to gradient-enhanced Kriging using a Bayesian interpretation". International Journal for Uncertainty Quantification. 4 (3): 205–223.
  2. ^ Wikle, C.K.; Berliner, L.M. (2007). "A Bayesian tutorial for data assimilation". Phys. D: Nonlin. Phenom. 230 (1–2): 1–16.