Title: Particle Learning of Gaussian Processes
Description: Sequential Monte Carlo (SMC) inference for fully Bayesian Gaussian process (GP) regression and classification models by particle learning (PL) following Gramacy & Polson (2011) <arXiv:0909.5262>. The sequential nature of inference and the active learning (AL) hooks provided facilitate thrifty sequential design (by entropy) and optimization (by improvement) for classification and regression models, respectively. This package essentially provides a generic PL interface, and functions (arguments to the interface) which implement the GP models and AL heuristics. Functions for a special, linked, regression/classification GP model and an integrated expected conditional improvement (IECI) statistic provide for optimization in the presence of unknown constraints. Separable and isotropic Gaussian, and single-index correlation functions are supported. See the examples section of ?plgp and demo(package="plgp") for an index of demos.
Authors: Robert B. Gramacy <[email protected]>
Maintainer: Robert B. Gramacy <[email protected]>
License: LGPL
Version: 1.1-12
Built: 2024-11-07 02:55:38 UTC
Source: https://github.com/cran/plgp
Sequential Monte Carlo inference for fully Bayesian Gaussian process (GP) regression and classification models by particle learning (PL). The sequential nature of inference and the active learning (AL) hooks provided facilitate thrifty sequential design (by entropy) and optimization (by improvement) for classification and regression models, respectively. This package essentially provides a generic PL interface, and functions (arguments to the interface) which implement the GP models and AL heuristics. Functions for a special, linked, regression/classification GP model and an integrated expected conditional improvement (IECI) statistic provide for optimization in the presence of unknown constraints. Separable and isotropic Gaussian, and single-index correlation functions are supported. See the examples section of ?plgp and demo(package="plgp") for an index of demos.
For a fuller overview including a complete list of functions and demos, please use help(package="plgp").
Robert B. Gramacy [email protected]
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Carvalho, C., Johannes, M., Lopes, H., and Polson, N. (2008). “Particle Learning and Smoothing”. Discussion Paper 2008-32, Duke University Dept. of Statistical Science.
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
PL, tgp
Add sufficient data common to all particles to the global pall variable, a mnemonic for "particles-all", for Gaussian process (GP) regression, classification, or combined unknown constraint models
addpall.GP(Z)
addpall.CGP(Z)
addpall.ConstGP(Z)
Z: new observation(s) (usually the next one in "time") to add to the pall global variable
All three functions add new Z$x to pall$X; addpall.GP also adds Z$y to pall$Y, addpall.CGP also adds Z$c to pall$Y, and addpall.ConstGP does both. Nothing is returned, but global variables are modified.
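A minimal sketch (not run) of folding a new regression observation into the global pall by hand after a PL run; the particular values in the Z$x and Z$y fields below are arbitrary, for illustration only:

Z <- list(x=matrix(0.5, nrow=1),  ## new input location
          y=0.25)                 ## new real-valued response
addpall.GP(Z)  ## appends Z$x to pall$X and Z$y to pall$Y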
Robert B. Gramacy, [email protected]
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
## See the demos via demo(package="plgp") and the examples
## section of ?plgp
Functions to supply data to PL for Gaussian process (GP) regression, classification, or combined unknown constraint models
data.GP(begin, end = NULL, X, Y)
data.GP.improv(begin, end = NULL, f, rect, prior, adapt = ei.adapt,
  cands = 40, save = TRUE, oracle = TRUE, verb = 2,
  interp = interp.loess)
data.CGP(begin, end = NULL, X, C)
data.CGP.adapt(begin, end = NULL, f, rect, prior, cands = 40,
  verb = 2, interp = interp.loess)
data.ConstGP(begin, end = NULL, X, Y, C)
data.ConstGP.improv(begin, end = NULL, f, rect, prior,
  adapt = ieci.const.adapt, cands = 40, save = TRUE, oracle = TRUE,
  verb = 2, interp = interp.loess)
begin: positive scalar integer giving the index ("time") of the first observation to supply
end: positive scalar integer (end >= begin) giving the index of the last observation to supply; may be NULL if only one observation is supplied
X: data.frame or matrix of input locations with at least end rows
Y: vector of length at least end containing real-valued responses
C: vector of length at least end containing class labels
f: function returning responses when called as f(X)
rect: bounding rectangle for the inputs X to f(X)
prior: prior parameters passed from PL, e.g., as generated by prior.GP
adapt: function that evaluates a sequential design criterion on some candidate locations; the defaults ei.adapt and ieci.const.adapt lead to evaluations of expected improvement (EI) and integrated expected conditional improvement (IECI), respectively
cands: number of Latin Hypercube candidate locations used to choose the next adaptively sampled input design point
save: scalar logical indicating whether information about the adaptive sampling criterion should be saved at each round
oracle: scalar logical indicating whether the candidate locations should be augmented with an "oracle" candidate, e.g., the minimizer of the predictive mean surface
verb: verbosity level for printing the progress of improv and other adaptive sampling calculations
interp: function for smoothing of 2-d image plots; the default, interp.loess, comes from the tgp package
These functions provide data to PL for Gaussian process regression and classification methods in a variety of ways. The simplest, data.GP and data.CGP, supply pre-recorded regression and classification data stored in data frames and vectors; data.ConstGP is a hybrid that does joint regression and classification. The other functions provide data by active learning/sequential design: the data.GP.improv function uses expected improvement (EI); data.CGP.adapt uses predictive entropy; data.ConstGP.improv uses integrated expected conditional improvement (IECI). In these cases, once the x-location(s) is/are chosen, the function f is used to provide the response(s). The outputs are vectors or data.frames.
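As a minimal sketch of the simplest case, data.GP below supplies the 5th through 10th observations from pre-recorded (X, Y) data, much as PL would request them through its dstream argument; the sinusoidal data-generating mechanism here is an assumption for illustration only:

X <- matrix(seq(0, 1, length=20), ncol=1)  ## assumed 1-d design
Y <- sin(2*pi*X[,1]) + rnorm(20, sd=0.1)   ## assumed noisy responses
z <- data.GP(begin=5, end=10, X=X, Y=Y)    ## observations 5 through 10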
Robert B. Gramacy, [email protected]
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
## See the demos via demo(package="plgp") and the examples
## section of ?plgp
Functions for using Metropolis-Hastings (MH) to evolve a particle according to the posterior distribution given by a Gaussian process (GP) for regression, classification, or combined unknown constraint model
draw.GP(Zt, prior, l = 3, h = 4, thin = 10, Y = NULL)
draw.CGP(Zt, prior, l = 3, h = 4, thin = 10)
draw.ConstGP(Zt, prior, l = 3, h = 4, thin = 10)
Zt: the particle describing model parameters and sufficient statistics that determines the predictive distribution
prior: prior parameters passed from PL, e.g., as generated by prior.GP
l: positive uniform random walk parameter used, together with h, to propose new parameter values from old ones
h: positive uniform random walk parameter; see l above
thin: thinning level in the MCMC; describes the number of MH rounds executed before the value is saved as a sample from the (marginal) posterior distribution
Y: not for external use; used internally by the CGP and ConstGP routines
These functions are used in two important places in plgp. At the user level, they can be used to initialize the particles at time start; see PL and the demos. Internally, they are used in the PL propagate step, e.g., by propagate.GP. draw.ConstGP is a combination of the draw.GP and draw.CGP methods, which are for regression and classification GPs, respectively. These functions return an updated particle Zt.
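A minimal sketch of the user-level case: build a fresh particle with init.GP and then evolve it by MH with draw.GP; this assumes the global pall already holds the first start observations (see PL):

prior <- prior.GP(1)                         ## default 1-d prior
Zt <- init.GP(prior)                         ## fresh particle
Zt <- draw.GP(Zt, prior, l=3, h=4, thin=10)  ## MH-evolved particle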
Robert B. Gramacy, [email protected]
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
## See the demos via demo(package="plgp") and the examples
## section of ?plgp
Generates 2-d classification data with two or three class labels, based on the Hessian data from a 2-d real-valued response
exp2d.C(X, threed = TRUE)
X: a matrix or data.frame of 2-d input locations at which class labels are desired
threed: a scalar logical indicating whether the third class label should be generated (the default) or only two
The underlying real-valued response is governed by Z(x) = x1 * exp(-x1^2 - x2^2). Two class labels are generated by inspecting the sign of the sum of the eigenvalues of the Hessian (Broderick & Gramacy, 2010). This generates the first (-) and second (+) classes in a three-class function. A third class label (the default) may be created from the first one where X[,1] > 0 (Gramacy & Polson, 2011). A vector of class labels of length nrow(X) is returned.
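A small sketch of generating labels on a random 2-d design; the input domain below is an assumption for illustration only:

X <- matrix(runif(200, -2, 4), ncol=2)  ## assumed 2-d design
C3 <- exp2d.C(X)                        ## three class labels (default)
C2 <- exp2d.C(X, threed=FALSE)          ## two class labels
table(C3)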
Robert B. Gramacy, [email protected]
Broderick, T. and Gramacy, R. (2010). “Classification and categorical inputs with treed Gaussian process models.” Tech. rep., University of Cambridge. ArXiv:0904.4891.
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
## The following demos use this data
## Not run:
## Illustrates classification GPs on a simple 2-d exponential
## data generating mechanism
demo("plcgp_exp", ask=FALSE)

## Illustrates active learning via entropy with classification
## GPs on a simple 2-d exponential data generating mechanism
demo("plcgp_exp_entropy", ask=FALSE)
## End(Not run)
Functions for initializing particles for Gaussian process (GP) regression, classification, or combined unknown constraint models
init.GP(prior, d = NULL, g = NULL, Y = NULL)
init.CGP(prior, d = NULL, g = NULL)
init.ConstGP(prior)
prior: prior parameters passed from PL, e.g., as generated by prior.GP
d: initial range (or length-scale) parameter(s) for the GP correlation function(s)
g: initial nugget parameter for the GP correlation
Y: data used to update GP sufficient information in the case of init.GP
Returns a particle for internal use in the PL method.
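A minimal sketch of creating a single particle under a default prior; the particular starting values for d and g are arbitrary:

prior <- prior.GP(2)                 ## default prior for 2-d inputs
Zt <- init.GP(prior, d=0.5, g=0.01)  ## particle with explicit d and g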
Robert B. Gramacy, [email protected]
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
## See the demos via demo(package="plgp") and the examples
## section of ?plgp
Log-predictive probability calculation for Gaussian process (GP) regression, classification, or combined unknown constraint models; primarily to be used in the particle learning (PL) re-sample step
lpredprob.GP(z, Zt, prior)
lpredprob.CGP(z, Zt, prior)
lpredprob.ConstGP(z, Zt, prior)
z: new observation whose (log) predictive probability is to be calculated given the particle Zt
Zt: the particle describing model parameters and sufficient statistics that determines the predictive distribution
prior: prior parameters passed from PL, e.g., as generated by prior.GP
This is the workhorse of the PL re-sample step. For each new observation (in sequence), the PL function calls lpredprob and these values determine the weights used in the sample function to obtain the new particle set, which is then propagated, e.g., using propagate.GP. The lpredprob.ConstGP is essentially the combination (product) of lpredprob.GP and lpredprob.CGP for regression and classification GP models, respectively. Returns a real-valued scalar: the log predictive probability.
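A hedged sketch of the kind of re-sample computation PL performs internally; the list structure of z, and the use of PL.env$peach and a prior object surviving from an earlier PL run, are assumptions here (see PL):

z <- list(x=matrix(0.5, nrow=1), y=0.25)  ## assumed new observation
lw <- sapply(PL.env$peach, lpredprob.GP, z=z, prior=prior)
w <- exp(lw - max(lw)); w <- w/sum(w)     ## normalized weights
idx <- sample(seq_along(w), length(w), replace=TRUE, prob=w)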
Robert B. Gramacy, [email protected]
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
## See the demos via demo(package="plgp") and the examples
## section of ?plgp
Applies a user-specified function to each particle contained in the global variables peach and pall, collecting the output in a data.frame.
papply(fun, verb = 1, pre = "", ...)
fun: a user-defined function that takes a particle as its first input; the output of fun is collected, row-wise, into a data.frame
verb: a scalar verbosity level; nonzero values cause progress to be printed on the screen
pre: an optional character prefix used when printing progress
...: these ellipses arguments are used to pass extra optional arguments to the user-supplied function fun
This is an extension of the built-in apply family of functions to particles, intended to be used with the particles created by PL. Perhaps the most common use of this function is in obtaining samples from the posterior predictive distribution, i.e., with the user-supplied fun = pred.GP. The particles applied over must be present in the global variables pall, containing sufficient information common to all particles, and peach, containing sufficient information particular to each particle, as constructed by PL. Returns a data frame with the collected output of the user-specified function fun.
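A hedged sketch of the most common use, collecting predictions from every particle after a PL run; it assumes papply routes each particle to pred.GP along with the extra XX and prior arguments, and that the prior object survives from the earlier run:

XX <- matrix(seq(0, 1, length=99), ncol=1)  ## assumed predictive grid
outp <- papply(fun=pred.GP, XX=XX, prior=prior, quants=TRUE)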
Robert B. Gramacy, [email protected]
Carvalho, C., Johannes, M., Lopes, H., and Polson, N. (2008). “Particle Learning and Smoothing.” Discussion Paper 2008-32, Duke University Dept. of Statistical Science.
https://bobby.gramacy.com/r_packages/plgp/
## See the demos via demo(package="plgp") and the examples
## section of ?plgp
Extract parameters from particles for Gaussian process (GP) regression, classification, or combined unknown constraint models
params.GP()
params.CGP()
params.ConstGP()
Collects the parameters from each of the particles (contained in the global variable peach) into a data.frame that can be used for quick summary and visualization, e.g., via hist. These functions are also called to make progress visualizations in PL. Returns a data.frame containing summaries for each parameter in its columns.
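A minimal sketch of a typical summary after a PL run; the column name d is an assumption about the parameters collected for a regression GP:

p <- params.GP()  ## one row per particle
summary(p)
hist(p$d)         ## spread of lengthscales across particles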
Robert B. Gramacy, [email protected]
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
PL, lpredprob.GP, propagate.GP, init.GP, pred.GP
## See the demos via demo(package="plgp") and the examples
## section of ?plgp
Implements the Particle Learning sequential Monte Carlo algorithm on the data sequence provided, using re-sample and propagate steps
PL(dstream, start, end, init, lpredprob, propagate, prior = NULL,
   addpall = NULL, params = NULL, save = NULL, P = 100,
   progress = 10, cont = FALSE, verb = 1)
dstream: function generating the data stream; for examples see data.GP
start: a scalar integer giving the "time" (data index) at which PL should start; the data up to that point are used to initialize the particles
end: a scalar integer (end >= start) giving the "time" at which PL should stop
init: function used to initialize the particles at the start of PL; for examples see draw.GP
lpredprob: function used to calculate the predictive probability of an observation (usually the next one in "time") given a particle. This is the primary function used in the PL re-sample step; for examples see lpredprob.GP
propagate: function used to propagate particles given an observation (usually the next one in "time"); for examples see propagate.GP
prior: function used to generate prior parameters that may be passed into the init, lpredprob, and propagate functions; for examples see prior.GP
addpall: an optional function that adds the new observation (usually the next one in "time") to the pall global variable; for examples see addpall.GP
params: an optional function, called every progress rounds, that collects parameters from the particles for summary and visualization; for examples see params.GP
save: an optional function that is called every round to save some information about the particles
P: number of particles to use
progress: number of PL rounds after which to collect params and draw progress visualizations
cont: if TRUE, PL continues from where it left off, re-using the particles in PL.env rather than re-initializing them
verb: if nonzero, then screen prints will indicate the proportion of PL updates finished so far; verb = 0 is quiet
Uses the PL SMC algorithm via the functions provided. This function is just a skeleton framework. The hard work is in specifying the arguments/functions which execute the calculations needed in the re-sample and propagate steps.
PL uses the variables stored in the PL.env environment: pall, containing sufficient information common to all particles; peach, containing sufficient information particular to each of the P particles; and psave, containing any saved information. These variables may be accessed as PL.env$psave, for example.
Note that PL is designed to be fast for sequential updating (of GPs) when new data arrive. This facilitates efficient sequential design of experiments by active learning techniques, e.g., optimization by expected improvement and sequential exploration of classification label boundaries by the predictive entropy. PL is not optimized for static inference when all of the data arrive at once, in batch.

PL modifies the PL.env$peach variable, containing sufficient information particular to each (of the P) particles.
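A hedged sketch of a complete call for 1-d GP regression, loosely following demo("plgp_sin1d"); the data-generating mechanism, the closure over (X, Y), and the particular settings are assumptions for illustration rather than a verbatim demo excerpt:

X <- matrix(seq(0, 1, length=50), ncol=1)  ## assumed design
Y <- sin(2*pi*X[,1]) + rnorm(50, sd=0.1)   ## assumed responses
dstream <- function(begin, end=NULL)       ## closure over (X, Y)
  data.GP(begin, end, X=X, Y=Y)
out <- PL(dstream, start=10, end=50, init=draw.GP,
          lpredprob=lpredprob.GP, propagate=propagate.GP,
          prior=prior.GP(1), addpall=addpall.GP,
          params=params.GP, P=100)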
Robert B. Gramacy, [email protected]
Carvalho, C., Johannes, M., Lopes, H., and Polson, N. (2008). “Particle Learning and Smoothing.” Discussion Paper 2008-32, Duke University Dept. of Statistical Science.
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
papply, draw.GP, data.GP, lpredprob.GP, propagate.GP, params.GP, pred.GP
## See the demos via demo(package="plgp"); it is important to ## run them with the ask=FALSE argument so that the ## automatically generated plots may refresh automatically ## (without requiring the user to press RETURN) ## Not run: ## Illustrates regression GPs on a simple 1-d sinusoidal ## data generating mechanism demo("plgp_sin1d", ask=FALSE) ## Illustrates classification GPs on a simple 2-d exponential ## data generating mechanism demo("plcgp_exp", ask=FALSE) ## Illustrates classification GPs on Ripley's Cushings data demo("plcgp_cush", ask=FALSE) ## Illustrates active learning via the expected improvement ## statistic on a simple 1-d data generating mechanism demo("plgp_exp_ei", ask=FALSE) ## Illustrates active learning via entropy with classification ## GPs on a simple 2-d exponential data generating mechanism demo("plcgp_exp_entropy", ask=FALSE) ## Illustrates active learning via the integrated expected ## conditional improvement statistic for optimization ## under known constraints on a simple 1-d data generating ## mechanism demo("plgp_1d_ieci", ask=FALSE) ## Illustrates active learning via the integrated expected ## conditional improvement statistic for optimization under ## unknown constraints on a simple 1-d data generating ## mechanism demo("plconstgp_1d_ieci", ask=FALSE) ## Illustrates active learning via the integrated expected ## conditional improvement statistic for optimization under ## unknokn constraints on a simple 2-d data generating ## mechanism demo("plconstgp_2d_ieci", ask=FALSE) ## End(Not run)
Prediction on a per-particle basis for Gaussian process (GP) regression, classification, or combined unknown constraint models
pred.GP(XX, Zt, prior, Y = NULL, quants = FALSE, Sigma = FALSE,
  sub = 1:Zt$t)
pred.CGP(XX, Zt, prior, mcreps = 100, cs = NULL)
pred.ConstGP(XX, Zt, prior, quants = TRUE)
XX: matrix or data.frame of predictive input locations
Zt: the particle describing model parameters and sufficient statistics that determines the predictive distribution
prior: prior parameters passed from PL, e.g., as generated by prior.GP
Y: not for external use; used internally by the CGP and ConstGP routines
quants: a scalar logical indicating whether predictive quantiles should be returned along with the mean
Sigma: a scalar logical indicating whether the full predictive covariance matrix should be returned
sub: not for external use; used internally by the CGP and ConstGP routines
mcreps: number of Monte Carlo iterations used in CGP prediction, integrating over the latent real-valued response
cs: indicates a class label at which the predictive probability is desired; the entire probability distribution over all class labels will be provided if not specified
For pred.GP the predictive mean (and quantiles if quants = TRUE) is provided. For pred.CGP the predictive distribution over the class labels is provided, unless only one class (cs) is desired. pred.ConstGP is a combination of the pred.GP and pred.CGP methods.

It is suggested that this function is used as an argument to papply to obtain many predictions, one for each particle in a cloud, which are combined into a data.frame. Some of the function arguments aren't meant to be specified by the user, but are rather there to facilitate usage as a subroutine inside other PL functions, such as lpredprob.GP and others.

A single-row data.frame is returned with the desired predictive; these rows are automatically combined when used with papply.
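A hedged sketch of prediction from a single particle after a PL run; the PL.env$peach particle set and the surviving prior object are assumptions here (see PL):

XX <- matrix(seq(0, 1, length=10), ncol=1)  ## assumed predictive grid
one <- pred.GP(XX, Zt=PL.env$peach[[1]], prior=prior, quants=TRUE)
## combine across all particles via papply, e.g.,
## papply(fun=pred.GP, XX=XX, prior=prior)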
Robert B. Gramacy, [email protected]
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
## See the demos via demo(package="plgp") and the examples
## section of ?plgp
Generate priors for Gaussian process (GP) regression, classification, or combined unknown constraint models
prior.GP(m, cov = c("isotropic", "separable", "sim"))
prior.CGP(m, cov = c("isotropic", "separable", "sim"))
prior.ConstGP(m, cov.GP = c("isotropic", "separable", "sim"),
  cov.CGP = cov.GP)
m: positive scalar integer specifying the dimensionality of the input space
cov: whether to use an isotropic, separable, or single-index ("sim") Gaussian correlation function
cov.GP: specifies the covariance for the real-valued response in the combined unknown constraint GP model
cov.CGP: specifies the covariance for the categorical response in the combined unknown constraint GP model
These functions generate a default prior object in the correct format for use with the other PL routines, e.g., init.GP and pred.GP. The object returned may be modified as necessary. The prior.ConstGP is essentially the combination of prior.GP and prior.CGP for regression and classification GP models, respectively.
A valid prior object for the appropriate GP model is returned. Making the output $drate and/or $grate values negative causes the corresponding lengthscale d parameter(s) and nugget g parameter to be fixed at the reciprocal of their absolute values, respectively. This effectively turns off inference for these values, and allows one to study the GP predictive distribution as a function of fixed values. When both are fixed it is sensible to use only one particle (P = 1, as an argument to PL).
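A minimal sketch of fixing both parameters via negative rates, per the note above; the particular values are arbitrary:

prior <- prior.GP(2, cov="separable")
prior$drate <- -2     ## fixes the lengthscale d at 1/abs(-2) = 0.5
prior$grate <- -1000  ## fixes the nugget g at 1/1000
## with both fixed, a single particle (P=1) suffices in PL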
Robert B. Gramacy, [email protected]
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
PL, lpredprob.GP, propagate.GP, init.GP, pred.GP
## See the demos via demo(package="plgp") and the examples
## section of ?plgp
Incorporation of a new data point for Gaussian process (GP) regression, classification, or combined unknown constraint models; primarily to be used in the particle learning (PL) propagate step
propagate.GP(z, Zt, prior)
propagate.CGP(z, Zt, prior)
propagate.ConstGP(z, Zt, prior)
z: new observation to be incorporated into the particle Zt
Zt: the particle describing model parameters and sufficient statistics into which the new data is being incorporated
prior: prior parameters passed from PL, e.g., as generated by prior.GP
This is the workhorse of the PL propagate step. After re-sampling the particles, PL calls propagate on each of the particles to obtain the set used in the next round/time-step. The propagate.ConstGP is essentially the combination of propagate.GP and propagate.CGP for regression and classification GP models, respectively. These functions return a new particle with the new observation incorporated.
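A hedged sketch of one propagate step applied to a single particle; the list structure of z, and the PL.env$peach particle set and surviving prior object, are assumptions here (see PL):

z <- list(x=matrix(0.5, nrow=1), y=0.25)  ## assumed new observation
Zt.new <- propagate.GP(z, Zt=PL.env$peach[[1]], prior=prior)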
Robert B. Gramacy, [email protected]
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman Hall/CRC; https://bobby.gramacy.com/surrogates/
https://bobby.gramacy.com/r_packages/plgp/
## See the demos via demo(package="plgp") and the examples
## section of ?plgp
Scale data lying in an arbitrary rectangle to lie in the unit rectangle, and back again
rectscale(X, rect)
rectunscale(X, rect)
X: a matrix or data.frame of input locations
rect: a matrix with two columns giving, in each row, the lower and upper bounds of the rectangle in the corresponding input dimension
A matrix or data.frame with the same dimensions as X, scaled or un-scaled as appropriate.
Robert B. Gramacy, [email protected]
https://bobby.gramacy.com/r_packages/plgp/
X <- matrix(runif(10, 1, 3), ncol=2)
rect <- rbind(c(1,3), c(1,3))
Xs <- rectscale(X, rect)
rectunscale(Xs, rect)