Update gpytorch to 1.13 #33

Open · wants to merge 1 commit into master
Conversation

@pyup-bot (Collaborator) commented Sep 6, 2024

This PR updates gpytorch from 1.0.1 to 1.13.

Changelog

1.13

What's Changed
* `main` and `develop` branches by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2542
* Include jaxtyping to allow for Tensor/LinearOperator typehints with sizes. by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2543
* fix: replace deprecated scipy.integrate.cumtrapz with cumulative_trapezoid by natsukium in https://github.com/cornellius-gp/gpytorch/pull/2545
* use common notation for normal distribution N(\mu, \sigma^2) by partev in https://github.com/cornellius-gp/gpytorch/pull/2547
* fix broken link in Simple_GP_Regression.ipynb by partev in https://github.com/cornellius-gp/gpytorch/pull/2546
* Deprecate last_dim_is_batch (bump PyTorch version to >= 2.0) by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2549
* Added ability for priors of transformed distributions to have their p… by hvarfner in https://github.com/cornellius-gp/gpytorch/pull/2551
* Avoid unnecessary memory allocation for covariance downdate in SGPR prediction strategy by JonathanWenger in https://github.com/cornellius-gp/gpytorch/pull/2559
* Fix VNNGP with batches by LuhuanWu in https://github.com/cornellius-gp/gpytorch/pull/2375
* fix a typo by partev in https://github.com/cornellius-gp/gpytorch/pull/2570
* fix a typo by partev in https://github.com/cornellius-gp/gpytorch/pull/2571
* DOC: improve the formatting in the documentation by partev in https://github.com/cornellius-gp/gpytorch/pull/2578

New Contributors
* natsukium made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2545
* hvarfner made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2551

**Full Changelog**: https://github.com/cornellius-gp/gpytorch/compare/v1.12...v1.13

1.12

What's Changed
* Minor patch to Matern covariances by j-wilson in https://github.com/cornellius-gp/gpytorch/pull/2378
* Fix error messages for ApproximateGP.get_fantasy_model by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2374
* Fix lazy kernel slicing when there are multiple outputs by douglas-boubert in https://github.com/cornellius-gp/gpytorch/pull/2376
* Fix training status of noise model of `HeteroskedasticNoise` after exceptions by fjzzq2002 in https://github.com/cornellius-gp/gpytorch/pull/2382
* Stop rbf_kernel_grad and rbf_kernel_gradgrad creating the full covariance matrix unnecessarily by douglas-boubert in https://github.com/cornellius-gp/gpytorch/pull/2388
* Likelihood bugfix by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2395
* Update RTD configuration, and linear_operator requirement. by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2399
* Better support for missing labels by Turakar in https://github.com/cornellius-gp/gpytorch/pull/2288
* Fix latex of gradients in docs by jlperla in https://github.com/cornellius-gp/gpytorch/pull/2404
* Skip the warning in `gpytorch.lazy.__getattr__` if name starts with `_` by saitcakmak in https://github.com/cornellius-gp/gpytorch/pull/2423
* Fix KeOps regressions from 2296. by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2413
* Update index.rst by mkomod in https://github.com/cornellius-gp/gpytorch/pull/2449
* `python` should also be a runtime dependency by jaimergp in https://github.com/cornellius-gp/gpytorch/pull/2457
* fix a typo: cannonical -> canonical by partev in https://github.com/cornellius-gp/gpytorch/pull/2461
* Update distributions.rst by chrisyeh96 in https://github.com/cornellius-gp/gpytorch/pull/2487
* Fix flaky SVGP classification test by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2495
* DOC: Fix typo in docstring. by johanneskopton in https://github.com/cornellius-gp/gpytorch/pull/2493
* fix a typo by partev in https://github.com/cornellius-gp/gpytorch/pull/2464
* DOC: fix formatting issue in RFFKernel documentation by partev in https://github.com/cornellius-gp/gpytorch/pull/2463
* DOC: fix broken formatting in leave_one_out_pseudo_likelihood.py by partev in https://github.com/cornellius-gp/gpytorch/pull/2462
* `ConstantKernel` by SebastianAment in https://github.com/cornellius-gp/gpytorch/pull/2511
* DOC: fix broken URL in periodic_kernel.py by partev in https://github.com/cornellius-gp/gpytorch/pull/2513
* Bug: Exploit Structure in get_fantasy_strategy by naefjo in https://github.com/cornellius-gp/gpytorch/pull/2494
* Matern52 grad by m-julian in https://github.com/cornellius-gp/gpytorch/pull/2512
* Added optional `kwargs` to `ExactMarginalLogLikelihood` call by rafaol in https://github.com/cornellius-gp/gpytorch/pull/2522
* Corrected configuration of ``exclude`` statements in ``pre-commit`` configuration by JonathanWenger in https://github.com/cornellius-gp/gpytorch/pull/2541

New Contributors
* douglas-boubert made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2376
* fjzzq2002 made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2382
* jlperla made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2404
* mkomod made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2449
* jaimergp made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2457
* partev made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2461
* chrisyeh96 made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2487
* johanneskopton made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2493
* naefjo made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2494
* rafaol made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2522

**Full Changelog**: https://github.com/cornellius-gp/gpytorch/compare/v1.11...v1.12

1.11

What's Changed
* Fix solve_triangular(Tensor, LinearOperator) not supported in VNNGP by Turakar in https://github.com/cornellius-gp/gpytorch/pull/2323
* Metrics fixes and cleanup by JonathanWenger in https://github.com/cornellius-gp/gpytorch/pull/2325
* Lock down doc requirements to prevent RTD failures. by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2339
* Fix typos in multivariate_normal.py by manuelhaussmann in https://github.com/cornellius-gp/gpytorch/pull/2331
* add Hamming IMQ kernel by samuelstanton in https://github.com/cornellius-gp/gpytorch/pull/2327
* Use torch.cdist for `dist` by esantorella in https://github.com/cornellius-gp/gpytorch/pull/2336
* Enable fantasy models for multitask GPs Reborn by yyexela in https://github.com/cornellius-gp/gpytorch/pull/2317
* Clean up deprecation warnings by saitcakmak in https://github.com/cornellius-gp/gpytorch/pull/2348
* More informative string representation of MultitaskMultivariateNormal distributions. by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2333
* Mean and kernel functions for first and second derivatives by ankushaggarwal in https://github.com/cornellius-gp/gpytorch/pull/2235
* Bugfix: double added log noise prior by LuisAugenstein in https://github.com/cornellius-gp/gpytorch/pull/2355
* Remove Module.__getattr__ by saitcakmak in https://github.com/cornellius-gp/gpytorch/pull/2359
* Remove num_outputs from IndependentModelList by saitcakmak in https://github.com/cornellius-gp/gpytorch/pull/2360
* keops periodic and keops kernels unit tests by m-julian in https://github.com/cornellius-gp/gpytorch/pull/2296
* Deprecate checkpointing by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2361

New Contributors
* Turakar made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2323
* manuelhaussmann made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2331
* esantorella made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2336
* yyexela made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2317
* ankushaggarwal made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2235
* LuisAugenstein made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2355

**Full Changelog**: https://github.com/cornellius-gp/gpytorch/compare/v1.10...v1.11

1.10

What's Changed
* Re-add pyro + torch_master check by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2241
* Fix silently ignored arguments in IndependentModelList by saitcakmak in https://github.com/cornellius-gp/gpytorch/pull/2249
* fix bug in nearest_neighbor_variational_strategy by LuhuanWu in https://github.com/cornellius-gp/gpytorch/pull/2243
* Move infinite interval bounds check into Interval constructor by Balandat in https://github.com/cornellius-gp/gpytorch/pull/2259
* Use ufmt for code formatting and import sorting by Balandat in https://github.com/cornellius-gp/gpytorch/pull/2262
* Update nearest_neighbors.py by yw5aj in https://github.com/cornellius-gp/gpytorch/pull/2267
* Use raw strings to avoid "DeprecationWarning: invalid escape sequence" by saitcakmak in https://github.com/cornellius-gp/gpytorch/pull/2282
* Fix handling of re-used priors by Balandat in https://github.com/cornellius-gp/gpytorch/pull/2269
* Fix BernoulliLikelihood documentation by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2285
* gpytorch.settings.variational_cholesky_jitter can be set dynamically. by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2255
* Likelihood docs update by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2292
* Improve development/contributing documentation by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2293
* Use raw strings to avoid "DeprecationWarning: invalid escape sequence" by saitcakmak in https://github.com/cornellius-gp/gpytorch/pull/2295
* Update SGPR notebook by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2303
* Update linear operator dependency to 0.4.0 by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2321

New Contributors
* yw5aj made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2267

**Full Changelog**: https://github.com/cornellius-gp/gpytorch/compare/v1.9.1...v1.10

1.9.1

What's Changed
* Fix LMCVariationalStrategy example in docs by adamjstewart in https://github.com/cornellius-gp/gpytorch/pull/2112
* Accept closure argument in NGD optimizer `step` by dannyfriar in https://github.com/cornellius-gp/gpytorch/pull/2118
* Fix bug with Multitask DeepGP predictive variances. by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2123
* Autogenerate parameter types in documentation from python typehints by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2125
* Retiring deprecated versions of `psd_safe_cholesky`, `NotPSDError`, and `assert_allclose` by SebastianAment in https://github.com/cornellius-gp/gpytorch/pull/2130
* fix custom dtype_value_context setting by sdaulton in https://github.com/cornellius-gp/gpytorch/pull/2132
* Include linear operator in installation instructions by saitcakmak in https://github.com/cornellius-gp/gpytorch/pull/2131
* Fixes HalfCauchyPrior by feynmanliang in https://github.com/cornellius-gp/gpytorch/pull/2137
* Fix return type of `Kernel.covar_dist` by Balandat in https://github.com/cornellius-gp/gpytorch/pull/2138
* Change variable name for better understanding by findoctorlin in https://github.com/cornellius-gp/gpytorch/pull/2135
* Expose jitter by hughsalimbeni in https://github.com/cornellius-gp/gpytorch/pull/2136
* Add HalfNormal prior distribution for non-negative variables. by ZitongZhou in https://github.com/cornellius-gp/gpytorch/pull/2147
* Fix multitask/added_loss_term bugs in SGPR regression by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2121
* fix bugs in test half Cauchy prior. by ZitongZhou in https://github.com/cornellius-gp/gpytorch/pull/2156
* Generalize RandomModule by feynmanliang in https://github.com/cornellius-gp/gpytorch/pull/2164
* MMVN.to_data_independent_dist returns correct variance for non-interleaved MMVN distributions. by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2172
* Update MSLL in metrics.py by jongwonKim-1997 in https://github.com/cornellius-gp/gpytorch/pull/2177
* Update multitask example notebook by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2190
* Fix exception message for missing kernel lazy kernel attribute by dannyfriar in https://github.com/cornellius-gp/gpytorch/pull/2195
* Improving `_sq_dist` when `x1_eq_x2` by SebastianAment in https://github.com/cornellius-gp/gpytorch/pull/2204
* Fix docs/requirements.txt by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2206
* As per issue '2175 [Docs] GP Regression With Uncertain Inputs'. by corwinjoy in https://github.com/cornellius-gp/gpytorch/pull/2200
* Avoid evaluating kernel when adding jitter by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2189
* Avoid evaluating kernel in `expand_batch` by dannyfriar in https://github.com/cornellius-gp/gpytorch/pull/2185
* Deprecating `postprocess` by SebastianAment in https://github.com/cornellius-gp/gpytorch/pull/2205
* Make PiecewisePolynomialKernel GPU compatible by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2217
* Let `LazyEvaluatedKernelTensor` recall the grad state at instantiation by SebastianAment in https://github.com/cornellius-gp/gpytorch/pull/2229
* Doc Update for Posterior Model Distribution and Posterior Predictive Distribution by varunagrawal in https://github.com/cornellius-gp/gpytorch/pull/2230
* Fix 08_Advanced_Usage links by st-- in https://github.com/cornellius-gp/gpytorch/pull/2240
* Add `device` property to `Kernel`s, add unit tests by Balandat in https://github.com/cornellius-gp/gpytorch/pull/2234
* pass **kwargs to ApproximateGP.__call__ in DeepGPLayer by IdanAchituve in https://github.com/cornellius-gp/gpytorch/pull/2224

New Contributors
* dannyfriar made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2118
* SebastianAment made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2130
* feynmanliang made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2137
* findoctorlin made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2135
* hughsalimbeni made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2136
* ZitongZhou made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2147
* jongwonKim-1997 made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2177
* corwinjoy made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2200
* varunagrawal made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2230
* st-- made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2240
* IdanAchituve made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2224

**Full Changelog**: https://github.com/cornellius-gp/gpytorch/compare/v1.9.0...v1.9.1

1.9.0

Starting with this release, the `LazyTensor` functionality of GPyTorch has been pulled out into its own separate Python package, called [linear_operator](https://github.com/cornellius-gp/linear_operator). Most users won't notice the difference (at the moment), but power users will notice a few changes.

If you have your own custom LazyTensor code, don't worry: this release is backwards compatible! However, you'll see a lot of annoying deprecation warnings 😄 

LazyTensor -> LinearOperator
- All `gpytorch.lazy.*LazyTensor` classes now live in the `linear_operator` repo, and are now called `linear_operator.operators.*LinearOperator`.
- For example, `gpytorch.lazy.DiagLazyTensor` is now `linear_operator.operators.DiagLinearOperator`
- The only major naming change: `NonLazyTensor` is now `DenseLinearOperator`
- `gpytorch.lazify` and `gpytorch.delazify` are now `linear_operator.to_linear_operator` and `linear_operator.to_dense`, respectively.
- The `_quad_form_derivative` method has been renamed to `_bilinear_derivative` (a more accurate name!)
- `LinearOperator` method names now reflect their corresponding PyTorch names (see the sketch after this list). This includes:
  - `add_diag` -> `add_diagonal`
  - `diag` -> `diagonal`
  - `inv_matmul` -> `solve`
  - `symeig` -> `eigh` and `eigvalsh`
- `LinearOperator` now has the `mT` property
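
A minimal sketch of the renamed API, using only names listed above; the call signatures are illustrative and should be checked against the `linear_operator` documentation:

```python
import torch
from linear_operator import to_dense
from linear_operator.operators import DiagLinearOperator

# Formerly gpytorch.lazy.DiagLazyTensor
op = DiagLinearOperator(torch.tensor([1.0, 2.0, 3.0]))

op = op.add_diagonal(torch.full((3,), 1e-4))  # formerly add_diag
diag = op.diagonal()                          # formerly diag
x = op.solve(torch.randn(3, 2))               # formerly inv_matmul
dense = to_dense(op)                          # formerly gpytorch.delazify
print(op.mT.shape)                            # new mT property, mirroring torch.Tensor.mT
```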

__torch_function__ functionality

LinearOperators are now compatible with the torch API! For example, the following code works:

```python
import torch
import linear_operator

diag_linear_op = linear_operator.operators.DiagLinearOperator(torch.randn(10))
torch.matmul(diag_linear_op, torch.randn(10, 2))  # returns a torch.Tensor!
```


Other files that have moved:

- `gpytorch.functions` - all of the core functions used by LazyTensors now live in the LinearOperator repo. This includes: diagonalization, dsmm, inv_quad, inv_quad_logdet, matmul, pivoted_cholesky, root_decomposition, solve (formerly inv_matmul), and sqrt_inv_matmul
- `gpytorch.utils` - a few have moved to the LinearOperator repo. This includes: broadcasting, cholesky, contour_integral_quad, getitem, interpolation, lanczos, linear_cg, minres, permutation, stable_pinverse, qr, sparse, StochasticLQ, and toeplitz.

**Full Changelog**: https://github.com/cornellius-gp/gpytorch/compare/v1.8.1...v1.9.0

1.8.1

Bug fixes

* MultitaskMultivariateNormal: fix tensor reshape issue by adamjstewart in https://github.com/cornellius-gp/gpytorch/pull/2081
* Fix handling of prior terms in ExactMarginalLogLikelihood by saitcakmak in https://github.com/cornellius-gp/gpytorch/pull/2039
* Fix bug in preconditioned KISS-GP / Hadamard Multitask GPs by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2090
* Add constant_constraint to ConstantMean by gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2082

New Contributors
* mone27 made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2076

**Full Changelog**: https://github.com/cornellius-gp/gpytorch/compare/v1.8.0...v1.8.1

1.8.0

Major Features
* add variational nearest neighbor GP by LuhuanWu in https://github.com/cornellius-gp/gpytorch/pull/2026

New Contributors
* adamjstewart made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2061
* m-julian made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2054
* ngam made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2059
* LuhuanWu made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2026

**Full Changelog**: https://github.com/cornellius-gp/gpytorch/compare/v1.7.0...v1.8.0

1.7.0

**Important**: This release requires Python 3.7 (up from 3.6) and PyTorch 1.10 (up from 1.9)

New Features
- gpytorch.metrics module offers easy-to-use metrics for GP performance (1870). This includes (see the usage sketch after this list):
 - gpytorch.metrics.mean_absolute_error
 - gpytorch.metrics.mean_squared_error
 - gpytorch.metrics.mean_standardized_log_loss
 - gpytorch.metrics.negative_log_predictive_density
 - gpytorch.metrics.quantile_coverage_error
- Large scale inference (using matrix-multiplication techniques) now implements the variance reduction scheme described in [Wenger et al., ICML 2022](https://arxiv.org/abs/2107.00243). (#1836)
 - This makes it possible to use LBFGS, or other line search based optimization techniques, with large scale (exact) GP hyperparameter optimization.
- Variational GP models support online updates (i.e. “fantasizing” new models) (1874)
 - This utilizes the method described in [Maddox et al., NeurIPS 2021](https://papers.nips.cc/paper/2021/hash/325eaeac5bef34937cfdc1bd73034d17-Abstract.html)
- Improvements to gpytorch.priors
 - New HalfCauchyPrior (1961)
 - LKJPrior now supports sampling (1737)
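
A rough usage sketch of the new metrics (not from the release notes): each metric takes the predictive distribution and the test targets. The toy model, data, and names below are made up for illustration, and the hyperparameter training loop is omitted:

```python
import torch
import gpytorch
from gpytorch.metrics import mean_absolute_error, negative_log_predictive_density

# Toy data, purely for illustration
train_x = torch.linspace(0, 1, 50)
train_y = torch.sin(6 * train_x)
test_x = torch.linspace(0, 1, 20)
test_y = torch.sin(6 * test_x)

class ToyGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ToyGP(train_x, train_y, likelihood)

# (Training omitted.) Each metric takes the predictive distribution and the targets.
model.eval()
likelihood.eval()
with torch.no_grad():
    pred_dist = likelihood(model(test_x))

print(mean_absolute_error(pred_dist, test_y))
print(negative_log_predictive_density(pred_dist, test_y))
```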

Minor Features
- Add LeaveOneOutPseudoLikelihood for hyperparameter optimization (1989)
- The PeriodicKernel now supports ARD lengthscales/periods (1919)
- LazyTensors (A) can now be matrix multiplied with tensors (B) from the left hand side (i.e. B x A) (1932)
- Maximum Cholesky retries can be controlled through a setting (1861)
- Kernels, means, and likelihoods can be pickled (1876) (see the sketch after this list)
- Minimum variance for FixedNoiseGaussianLikelihood can be set with a context manager (2009) 
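
A minimal sketch of the pickling support; the specific kernel below is an arbitrary choice, not from the release notes:

```python
import pickle

import gpytorch

# Kernel modules (and means/likelihoods) should now round-trip through pickle.
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(ard_num_dims=2))
restored = pickle.loads(pickle.dumps(kernel))
print(type(restored).__name__, restored.base_kernel.lengthscale.shape)
```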

Bug Fixes
- Fix backpropagation issues with KeOps kernels (1904)
- Fix broadcasting issues with lazily evaluated kernels (1971)
- Fix batching issues with PolynomialKernel (1977)
- Fix issues with PeriodicKernel.diag() (1919)
- Add more informative error message when train targets and the train prior distribution mismatch (1905)
- Fix issues with priors on ConstantMean (2042)

1.6.0

This release contains several bug fixes and performance improvements.

New Features
- Variational multitask models can output a single task per input (rather than all tasks per input) (1769)

Small fixes
- `LazyTensor.to` method more closely matches the torch Tensor API (1746)
- Add type hints and exceptions to kernels to improve usability (1802)

Performance
- Improve the speed of fantasy models (1752)
- Improve the speed of solves and log determinants with KroneckerProductLazyTensor (1786)
- Prevent explicit kernel evaluation when expanding a LazyTensor kernel (1813)

Fixes
- Fix indexing bugs with kernels (1802, 1819, 1828)
- Fix cholesky bugs on CUDA (1848)
- Remove lines of code that generate warnings in PyTorch 1.9 (1835)

1.5.1

New features

- Add `gpytorch.kernels.PiecewisePolynomialKernel` (1738)
- Include ability to turn off diagonal correction for SGPR models (1717)
- Include ability to cast LazyTensor to half and float types (1726)


Performance improvements

- Specialty MVN log_prob method for Gaussians with sum-of-Kronecker covariances (1674)
- Ability to specify devices when concatenating rows of LazyTensors (1712)
- Improvements to LazyTensor symeig method (1725)


Bug fixes

- Fix to computing batch sizes of kernels (1685)
- Fix SGPR prediction when `fast_computations` flags are turned off (1709)
- Improve stability of `stable_qr` function (1714)
- Fix bugs with pyro integration for full Bayesian inference (1721)
- `num_classes` in `gpytorch.likelihoods.DirichletLikelihood` should be an integer (1728)

1.5.0

This release adds 2 new model classes, as well as a number of bug fixes:
- GPLVM models for unsupervised learning
- Polya-Gamma GPs for GP classification
In addition, this release contains numerous improvements to SGPR models (that have also been included in prior bug-fix releases).

New features
- Add example notebook that demos binary classification with Polya-Gamma augmentation (1523)
- New model class: Bayesian GPLVM with Stochastic Variational Inference  (1605)
- Periodic kernel handles multi-dimensional inputs (1593)
- Add missing-data Gaussian likelihoods (1668)

Performance
- Speed up SGPR models (1517, 1528, 1670)

Fixes
- Fix erroneous loss for ExactGP multitask models (1647)
- Fix pyro sampling (1594)
- Fix initialize bug for additive kernels (1635)
- Fix matrix multiplication of rectangular ZeroLazyTensor (1295)
- Dirichlet GPs use true train targets not labels (1641)

1.4.2

Various bug fixes, including

- Use current PyTorch functionality (1611, 1586)
- Bug fixes to Lanczos factorization (1607)
- Fixes to SGPR model (1607)
- Various fixes to LazyTensor math (1576, 1584)
- SmoothedBoxPrior has a sample method (1546)
- Fixes to additive-structure models (1582)
- Doc fixes (1603)
- Fix to index kernel and LCM kernels (1608, 1592)
- Fixes to KeOps bypass (1609)

1.4.1

Fixes
- Simplify interface for 3+ layer DSPP models (1565)
- Fix marginal log likelihood calculation for exact Bayesian inference w/ Pyro (1571)
- Remove CG warning for small matrices (1562)
- Fix Pyro cluster-multitask example notebook (1550)
- Fix gradients for KeOps tensors (1543)
- Ensure that gradients are passed through lazily-evaluated kernels (1518)
- Fix bugs for models with batched fantasy observations (1529, 1499)
- Correct default `latent_dim` value for LMC variational models (1512)

New features
- Create `gpytorch.utils.grid.ScaleToBounds` utility to replace `gpytorch.utils.grid.scale_to_bounds` method (1566) (see the sketch after this list)
- Fix skip connections in Deep GP example (1531)
- Add fantasy point support for structured kernel interpolation models (1545)
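
A rough sketch of the new utility, assuming it is constructed with lower/upper bounds and applied like a module to a feature tensor; the bounds and data below are illustrative:

```python
import torch
from gpytorch.utils.grid import ScaleToBounds

scale_to_bounds = ScaleToBounds(-1.0, 1.0)

features = 5.0 * torch.randn(100, 3) + 2.0
scaled = scale_to_bounds(features)  # features rescaled to lie (roughly) within [-1, 1]
print(scaled.min().item(), scaled.max().item())
```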

Documentation
- Add default values to all gpytorch.settings (1564)
- Improve Hadamard multitask notebook (1537)

Performance
- Speed up SGPR models (1517, 1528)

1.4.0

This release includes many major speed improvements, especially to Kronecker-factorized multi-output models.

Performance improvements
- Major speed improvements for Kronecker product multitask models (1355, 1430, 1440, 1469, 1477)
- Unwhitened VI speed improvements (1487)
- SGPR speed improvements (1493)
- Large scale exact GP speed improvements (1495)
- Random Fourier feature speed improvements (1446, 1493)

New Features 
- Dirichlet Classification likelihood (1484) - based on Milios et al. (NeurIPS 2018)
- MultivariateNormal objects have a `base_sample_shape` attribute for low-rank/degenerate distributions (1502)

New documentation
- Tutorial for designing your own kernels (1421)

Debugging utilities
- Better naming conventions for AdditiveKernel and ProductKernel (1488)
- `gpytorch.settings.verbose_linalg` context manager for seeing what linalg routines are run (1489)
- Unit test improvements (1430, 1437)

Bug Fixes
- `inverse_transform` is applied to the initial values of constraints (1482)
- `psd_safe_cholesky` obeys cholesky_jitter settings (1476)
- fix scaling issue with priors on variational models (1485)

Breaking changes
- `MultitaskGaussianLikelihoodKronecker` (deprecated) is fully incorporated in `MultitaskGaussianLikelihood` (1471)

1.3.1

Fixes 
- Spectral mixture kernels work with SKI (1392) 
- Natural gradient descent is compatible with batch-mode GPs (1416) 
- Fix prior mean in whitened SVGP (1427) 
- RBFKernelGrad has no more in-place operations (1389) 
- Fixes to ConstantDiagLazyTensor (1381, 1385)

Documentation
- Include example notebook for multitask Deep GPs (1410) 
- Documentation updates (1408, 1434, 1385, 1393)

Performance
- KroneckerProductLazyTensors use root decompositions of children (1394)
- SGPR now uses Woodbury formula and matrix determinant lemma (1356)

Other
- Delta distributions have an `arg_constraints` attribute (1422) 
- Cholesky factorization now takes optional diagonal noise argument (1377)

1.3.0

This release primarily focuses on performance improvements, and adds contour integral quadrature based variational models.

Major Features 

Variational models with contour integral quadrature
- Add an MVM-based approach to whitened variational inference (1372)
- This is based on the work in [Fast Matrix Square Roots with Applications to Gaussian Processes and Bayesian Optimization](https://arxiv.org/abs/2006.11267)

Minor Features

Performance improvements
- Kronecker product models compute a deterministic logdet (faster than the Lanczos-based logdet) (1332)
- Improve efficiency of `KroneckerProductLazyTensor` symeig method (1338)
- Improve SGPR efficiency (1356)

Other improvements
- `SpectralMixtureKernel` accepts arbitrary batch shapes (1350)
- Variational models pass around arbitrary `**kwargs` to the `forward` method (1339)
- `gpytorch.settings` context managers keep track of their default value (1347)
- Kernel objects can be pickle-d (1336)

Bug Fixes
- Fix `requires_grad` checks in `gpytorch.inv_matmul` (1322) 
- Fix reshaping bug for batch independent multi-output GPs (1368)
- `ZeroMean` accepts a `batch_shape` argument (1371)
- Various doc fixes/improvements (1327, 1343, 1315, 1373)

1.2.1

This release includes the following fixes:

- Fix caching issues with variational GPs (1274, 1311)
- Ensure that constraint bounds are properly cast to floating point types (1307)
- Fix bug with broadcasting multitask multivariate normal shapes (1312)
- Bypass KeOps for small/rectangular kernels (1319)
- Fix issues with `eigenvectors=False` in `LazyTensor.symeig` (1283)
- Fix issues with fixed-noise LazyTensor preconditioner (1299)
- Doc fixes (1275, 1301)

1.2.0

Major Features

New variational and approximate models
This release features a number of new and added features for approximate GP models:

- Linear model of coregionalization for variational multitask GPs (1180) 
- Deep Sigma Point Process models (1193)
- Mean-field decoupled (MFD) models from "Parametric Gaussian Process Regressors" (Jankowiak et al., 2020) (1179) 
- Implement natural gradient descent (1258)
- Additional non-conjugate likelihoods (Beta, StudentT, Laplace) (1211)

New kernels
We have just added a number of new specialty kernels:

- `gpytorch.kernels.GaussianSymmetrizedKLKernel` for performing regression with uncertain inputs (1186)
- `gpytorch.kernels.RFFKernel` (random Fourier features kernel) (1172, 1233)
- `gpytorch.kernels.SpectralDeltaKernel` (a parametric kernel for patterns/extrapolation) (1231)

More scalable sampling
- Large-scale sampling with contour integral quadrature from Pleiss et al., 2020 (1194)

Minor features
- Ability to set amount of jitter added when performing Cholesky factorizations (1136)
- Improve scalability of KroneckerProductLazyTensor (1199, 1208)
- Improve speed of preconditioner (1224)
- Add symeig and svd methods to LazyTensors (1105)
- Add TriangularLazyTensor for Cholesky methods (1102)

Bug fixes
- Fix initialization code for `gpytorch.kernels.SpectralMixtureKernel` (1171)
- Fix bugs with LazyTensor addition (1174)
- Fix issue with loading smoothed box priors (1195) 
- Throw warning when variances are not positive, check for valid correlation matrices (1237, 1241, 1245) 
- Fix sampling issues with Pyro integration (1238)

1.1.1

Major features

- GPyTorch is compatible with PyTorch 1.5 (latest release)
- Several bugs with task-independent multitask models are fixed (1110)
- Task-dependent multitask models are more batch-mode compatible (1087, 1089, 1095)

Minor features

- `gpytorch.priors.MultivariateNormalPrior` has an expand method (1018)
- Better broadcasting for batched inducing point models (1047)
- `LazyTensor` repeating works with rectangular matrices (1068)
- `gpytorch.kernels.ScaleKernel` inherits the `active_dims` property from its base kernel (1072)
- Fully-bayesian models can be saved (1076)

Bug Fixes

- `gpytorch.kernels.PeriodicKernel` is batch-mode compatible (1012)
- Fix `gpytorch.priors.MultivariateNormalPrior` expand method (1018)
- Fix indexing issues with `LazyTensors` (1029)
- Fix constants with `gpytorch.mlls.GammaRobustVariationalELBO` (1038, 1053)
- Prevent doubly-computing derivatives of kernel inputs (1042)
- Fix initialization issues with `gpytorch.kernels.SpectralMixtureKernel` (1052)
- Fix stability of `gpytorch.variational.DeltaVariationalStrategy`