rc.gpr.models§
Contains the MOGP class implementing Gaussian Process Regression.
Classes§

Likelihood |
GPR | Interface to a Gaussian Process.
MOGP | Implementation of a Gaussian Process.
Module Contents§
- class Likelihood(parent, read_data=False, **kwargs)§
Bases:
rc.base.models.DataBase
NamedTables(NamedTuple) in a folder alongside Meta. Abstract base class for any model. DataBase subclasses must be implemented according to the template (copy and paste it):

```python
class MyDataBase(DataBase):

    class NT(NamedTuple):
        names[i]: Table | Matrix | MetaData = pd.DataFrame(defaults[names[i]].pd)  #: Comment
        ...

    def __call__(self, name: str) -> Table | Matrix | MetaData:
        """ Returns the Table named ``name``."""
        return getattr(self, name)

    options: NamedTables[MetaData] = NamedTables(**{name: table.options for name, table in {}.items()})
    """ Class attribute of the form ``NamedTables(**{names[i]: options[i], ...})``. Override as necessary
    for bespoke ``Table.options``. Elements of ``options[i]`` found in ``Table.writeOptions`` populate
    ``self[i].options.write``, the remainder populate ``self[i].options.read``."""

    defaultMetaData: MetaData = {'Tables': Tables.options._asdict()}
```
- Parameters:
parent (GPR)
read_data (bool)
kwargs (NP.Matrix)
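A concrete, purely illustrative instance of the template above might look as follows. The table name variance and its default are assumptions for the sake of the sketch, not the real fields of Likelihood.Data:

```python
import pandas as pd
from typing import NamedTuple

from rc.base.models import DataBase  # the documented base class

class MyDataBase(DataBase):

    class NT(NamedTuple):
        # 'variance' is a hypothetical table name with a hypothetical default.
        variance: pd.DataFrame = pd.DataFrame([[1.0]])  #: Noise variance table.

    def __call__(self, name: str) -> pd.DataFrame:
        """ Returns the Table named ``name``."""
        return getattr(self, name)
```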
- class Data§
Bases:
rc.base.models.Tables
The Data set of a MOGP.
- property NamedTuple: rc.base.definitions.Type[rc.base.definitions.NamedTuple]§
- Classmethod:
- Return type:
rc.base.definitions.Type[rc.base.definitions.NamedTuple]
The NamedTuple underpinning this Data set.
- calibrate(**kwargs)§
Merely sets the trainable data.
- Return type:
rc.base.definitions.Dict[str, rc.base.definitions.Any]
- class GPR(name, fold, is_read, is_covariant, is_isotropic, kernel_parameters=None, likelihood_variance=None)§
Bases:
rc.base.models.DataBase
Interface to a Gaussian Process.
- Parameters:
name (str)
fold (rc.data.models.Fold)
is_read (bool | None)
is_covariant (bool)
is_isotropic (bool)
kernel_parameters (Data | None)
likelihood_variance (NP.Matrix | None)
- class Data§
Bases:
rc.base.models.Tables
The Data set of a MOGP.
- property NamedTuple: rc.base.definitions.Type[rc.base.definitions.NamedTuple]§
- Classmethod:
- Return type:
rc.base.definitions.Type[rc.base.definitions.NamedTuple]
The NamedTuple underpinning this Data set.
- property META: rc.base.definitions.Dict[str, rc.base.definitions.Any]§
- Classmethod:
- Abstractmethod:
- Return type:
rc.base.definitions.Dict[str, rc.base.definitions.Any]
Hyper-parameter optimizer meta
- property KERNEL_FOLDER_NAME: str§
- Classmethod:
- Return type:
str
The name of the folder where kernel data are stored.
- property fold: rc.data.models.Fold§
The parent fold.
- Return type:
rc.data.models.Fold
- property implementation: rc.base.definitions.Tuple[rc.base.definitions.Any, Ellipsis]§
- Abstractmethod:
- Return type:
rc.base.definitions.Tuple[rc.base.definitions.Any, Ellipsis]
The implementation of this MOGP in GPFlow. If noise_variance.shape == (1, L), an L-tuple of kernels is returned. If noise_variance.shape == (L, L), a 1-tuple of multi-output kernels is returned.
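The shape rule can be sketched as follows (illustrative only; the actual construction of GPFlow kernels happens inside the concrete implementation):

```python
import numpy as np

def kernel_tuple_length(noise_variance: np.ndarray, L: int) -> int:
    # (1, L): independent outputs, one kernel per output.
    if noise_variance.shape == (1, L):
        return L
    # (L, L): covariant outputs, a single multi-output kernel.
    if noise_variance.shape == (L, L):
        return 1
    raise ValueError(f'Unexpected noise_variance.shape {noise_variance.shape}.')
```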
- property L: int§
The output (Y) dimensionality.
- Return type:
int
- property M: int§
The input (X) dimensionality.
- Return type:
int
- property N: int§
The number of training samples.
- Return type:
int
- property X: rc.base.definitions.Any§
- Abstractmethod:
- Return type:
rc.base.definitions.Any
The implementation training inputs.
- property Y: rc.base.definitions.Any§
- Abstractmethod:
- Return type:
rc.base.definitions.Any
The implementation training outputs.
- property K_cho: rc.base.definitions.Union[NP.Matrix, TF.Tensor]§
- Abstractmethod:
- Return type:
rc.base.definitions.Union[NP.Matrix, TF.Tensor]
The Cholesky decomposition of the LNxLN noisy kernel k(X, X) + likelihood.variance. Shape is (LN, LN) if self.kernel.is_covariant, else (L, N, N).
- property K_inv_Y: rc.base.definitions.Union[NP.Matrix, TF.Tensor]§
- Abstractmethod:
- Return type:
rc.base.definitions.Union[NP.Matrix, TF.Tensor]
The LN-Vector which, pre-multiplied by the LoxLN kernel k(x, X), gives the Lo-Vector predictive mean f(x). Shape is (L, 1, N). Returns: ChoSolve(self.K_cho, self.Y).
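A minimal NumPy/SciPy sketch of the algebra behind K_cho and K_inv_Y, for a single output (L=1) with a hypothetical RBF kernel k (the real kernel lives in the implementation):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hypothetical RBF kernel for illustration only.
def k(A, B, lengthscale=1.0):
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 3))             # (N, M) training inputs.
Y = np.sin(X.sum(axis=1, keepdims=True))  # (N, 1) training outputs.

# K_cho: Cholesky of the noisy kernel(X, X) + likelihood.variance.
K_cho = cho_factor(k(X, X) + 1e-3 * np.eye(len(X)))
# K_inv_Y: ChoSolve(K_cho, Y).
K_inv_Y = cho_solve(K_cho, Y)

x = rng.uniform(size=(5, 3))              # (o, M) test inputs.
mean = k(x, X) @ K_inv_Y                  # (o, 1) predictive mean f(x).
```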
- abstractmethod predict(x, y_instead_of_f=True)§
Predicts the response to input x.
- Parameters:
x (NP.Matrix) – An (o, M) design Matrix of inputs.
y_instead_of_f (bool) – True to include noise in the variance of the result.
- Return type:
rc.base.definitions.Tuple[NP.Matrix, NP.Matrix]
Returns: The distribution of y or f, as a pair (mean (o, L) Matrix, std (o, L) Matrix).
- predict_df(x, y_instead_of_f=True, is_normalized=True)§
Predicts the response to input x.
- Parameters:
x (NP.Matrix) – An (o, M) design Matrix of inputs.
y_instead_of_f (bool) – True to include noise in the variance of the result.
is_normalized (bool) – Whether the results are normalized or not.
- Return type:
rc.base.definitions.pd.DataFrame
Returns: The distribution of y or f, as a dataframe with M+L+L columns of the form (X, Mean, Predictive Std).
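Hypothetical usage, assuming gp is a calibrated GPR subclass and x an (o, M) matrix:

```python
# The frame has M input columns, then L 'Mean' and L 'Predictive Std' columns.
df = gp.predict_df(x, y_instead_of_f=True, is_normalized=False)
print(df.columns)
```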
- abstractmethod predict_gradient(x, y_instead_of_f=True)§
Predicts the gradient GP dy/dx (or df/dx) where self is the GP for y(x).
- Parameters:
x (NP.Matrix) – An (o, M) design Matrix of inputs.
y_instead_of_f (bool) – True to include noise in the variance of the result.
- Return type:
rc.base.definitions.Tuple[TF.Tensor, TF.Tensor]
- Returns: The distribution of dy/dx or df/dx, as a pair (mean (o, L, M), cov (o, L, M, O, l, m)) if self.likelihood.is_covariant, else (mean (o, L, M), cov (o, O, L, M)).
- test()§
Tests the MOGP on the test data in self._fold.test_data. Test results comprise three values for each output at each sample: the mean prediction, the std error of prediction, and the Z score of prediction (i.e. the error of prediction scaled by its std error).
Returns: The test_data results as a DataTable backed by MOGP.test_result_csv.
- Return type:
rc.data.models.Table
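A sketch of the three per-sample, per-output values described above (names are illustrative; the stored layout is defined by MOGP.test_result_csv):

```python
import numpy as np

def test_summary(y_true: np.ndarray, mean: np.ndarray, std: np.ndarray):
    # mean, std are (n, L) predictions; y_true is the (n, L) test data.
    error = mean - y_true
    z_score = error / std  # prediction error scaled by its std error.
    return mean, std, z_score
```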
- broadcast_parameters(is_covariant, is_isotropic)§
Broadcast the data of the MOGP (including kernels) to higher dimensions. Shrinkage raises errors, unchanged dimensions silently do nothing.
- Parameters:
is_covariant (bool) – Whether the outputs will be treated as dependent.
is_isotropic (bool) – Whether to restrict the kernel to be isotropic.
- Returns:
self, for chaining calls.
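Because broadcast_parameters returns self, it chains naturally (hypothetical usage):

```python
# Promote an independent, isotropic MOGP to covariant, anisotropic form, then re-optimize.
gp.broadcast_parameters(is_covariant=True, is_isotropic=False).calibrate()
```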
- class MOGP(name, fold, is_read, is_covariant, is_isotropic, kernel_parameters=None, likelihood_variance=None)§
Bases:
GPR
Implementation of a Gaussian Process.
- Parameters:
name (str)
fold (rc.data.models.Fold)
is_read (bool | None)
is_covariant (bool)
is_isotropic (bool)
kernel_parameters (Data | None)
likelihood_variance (NP.Matrix | None)
- property META: rc.base.definitions.Dict[str, rc.base.definitions.Any]§
- Classmethod:
- Return type:
rc.base.definitions.Dict[str, rc.base.definitions.Any]
Hyper-parameter optimizer meta
- property implementation: rc.base.definitions.Tuple[rc.base.definitions.Any, Ellipsis]§
The implementation of this MOGP in GPFlow. If noise_variance.shape == (1, L), an L-tuple of kernels is returned. If noise_variance.shape == (L, L), a 1-tuple of multi-output kernels is returned.
- Return type:
rc.base.definitions.Tuple[rc.base.definitions.Any, Ellipsis]
- calibrate(method='L-BFGS-B', **kwargs)§
Optimize the MOGP hyper-data.
- Parameters:
method (str) – The optimization algorithm (see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html).
kwargs – A Dict of implementation-dependent optimizer meta, following the format of GPR.META. MetaData for the kernel should be passed as kernel={see kernel.META for format}. MetaData for the likelihood should be passed as likelihood={see likelihood.META for format}.
- Return type:
rc.base.definitions.Dict[str, rc.base.definitions.Any]
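A hypothetical call illustrating the kwargs format; the meta dicts are left empty here because their keys are defined by kernel.META and likelihood.META:

```python
results = gp.calibrate(
    method='L-BFGS-B',
    kernel={},      # MetaData in the format of kernel.META.
    likelihood={},  # MetaData in the format of likelihood.META.
)
```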
- predict(X, y_instead_of_f=True)§
Predicts the response to input X.
- Parameters:
X (NP.Matrix) – An (o, M) design Matrix of inputs.
y_instead_of_f (bool) – True to include noise in the variance of the result.
- Return type:
rc.base.definitions.Tuple[NP.Matrix, NP.Matrix]
Returns: The distribution of y or f, as a pair (mean (o, L) Matrix, std (o, L) Matrix).
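Hypothetical usage, assuming a calibrated MOGP gp:

```python
import numpy as np

x = np.zeros((5, gp.M))  # (o, M) design matrix, o=5 here.
mean, std = gp.predict(x, y_instead_of_f=True)
assert mean.shape == (5, gp.L) and std.shape == (5, gp.L)
```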
- predict_gradient(x, y_instead_of_f=True)§
Predicts the gradient GP dy/dx (or df/dx) where self is the GP for y(x).
- Parameters:
x (NP.Matrix) – An (o, M) design Matrix of inputs.
y_instead_of_f (bool) – True to include noise in the variance of the result.
- Return type:
rc.base.definitions.Tuple[TF.Tensor, TF.Tensor]
- Returns: The distribution of dy/dx or df/dx, as a pair (mean (o, L, M), cov (o, L, M, O, l, m)) if self.likelihood.is_covariant, else (mean (o, L, M), cov (o, O, L, M)).
- property X: TF.Matrix§
The implementation training inputs as an (N,M) design matrix.
- Return type:
TF.Matrix
- property Y: TF.Matrix§
The implementation training outputs as an (N,L) design matrix.
- Return type:
TF.Matrix
- property K_cho: TF.Tensor§
The Cholesky decomposition of the LNxLN noisy kernel k(X, X) + likelihood.variance. Shape is (LN, LN) if self.kernel.is_covariant, else (L, N, N).
- Return type:
TF.Tensor
- property K_inv_Y: TF.Tensor§
The LN-Vector which, pre-multiplied by the LoxLN kernel k(x, X), gives the Lo-Vector predictive mean f(x). Shape is (L, 1, N). Returns: ChoSolve(self.K_cho, self.Y).
- Return type:
TF.Tensor
- check_K_inv_Y(x)§
FOR TESTING PURPOSES ONLY. Should return a zero Vector (to within numerical error tolerance).
- Parameters:
x (NP.Matrix) – An (o, M) matrix of inputs.
- Return type:
NP.Matrix
Returns: zeros((Lo)) (to within numerical error tolerance).
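A hypothetical test-harness call:

```python
import numpy as np

residual = gp.check_K_inv_Y(x)  # x is an (o, M) matrix of inputs.
assert np.allclose(residual, 0.0, atol=1e-6)  # zeros((Lo)) within tolerance.
```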