Releases: helmholtz-analytics/heat
Heat 1.6.0 - More Decompositions, Larger Buffers and Apple MPS Support
Heat 1.6.0 Release Notes
- Overview
- Highlights
- Linear Algebra & Decomposition
- Signal Processing
- I/O
- Core & MPI
- Other New Features
- Bug Fixes
- Interoperability & Build System
- Contributors
- Acknowledgements and Disclaimer
Overview
With Heat 1.6.0 we release the next major set of features, including continued developments within the ESAPCA project funded by the European Space Agency (ESA).
The main focus of this release is a significant expansion of our distributed linear algebra capabilities, including full SVD, symmetric eigenvalue decomposition, and polar decomposition, all leveraging the efficient "Zolotarev approach". We also introduce Dynamic Mode Decomposition (DMD and DMDc) for the analysis of complex systems.
On the performance side, the MPI communication layer has been enhanced to support buffer sizes exceeding the previous 2³¹-1 element limit, enabling data transfers at an unprecedented scale.
This release also introduces support for the Zarr data format for I/O operations and experimental hardware acceleration on Apple Silicon via Metal Performance Shaders (MPS). Finally, the project's build system has been modernized to use `pyproject.toml`, improving its maintainability and alignment with current Python packaging standards.
With this release, Heat drops support for Python 3.9, now requiring Python 3.10 or newer, and extends compatibility to include PyTorch versions up to 2.7.x.
We are grateful to our community of users, students, open-source contributors, the European Space Agency, and the Helmholtz Association for their support and feedback.
Highlights
- [ESAPCA] Symmetric Eigenvalue Decomposition (`ht.linalg.eigh`) and full SVD (`ht.linalg.svd`) via Zolotarev Polar Decomposition (by @mrfh92; see the sketch after this list)
- [ESAPCA] Dynamic Mode Decomposition with and without control: `ht.decomposition.DMD`, `ht.decomposition.DMDc` (by @mrfh92)
- Support for communicating MPI buffers larger than 2³¹-1 elements (by @JuanPedroGHM)
- I/O support for the Zarr data format: `ht.load_zarr`, `ht.save_zarr` (by @Berkant03)
- Expanded QR decomposition for non-tall-skinny matrices (by @mrfh92)
- Support for Apple MPS hardware acceleration (by @ClaudiaComito)
- Strided 1D convolution (by @lolacaro)
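As a quick illustration of the new factorizations, here is a minimal sketch; the return conventions of `ht.linalg.eigh` and `ht.linalg.svd` (value order, transposition of the right factor) are assumptions modeled on the NumPy-style interface, so check the `ht.linalg` documentation for the exact signatures:

```python
import heat as ht

# build a symmetric (Gram) matrix, distributed across MPI processes
X = ht.random.randn(1000, 1000, split=0)
A = X @ X.T

# symmetric eigenvalue decomposition (assumed to return eigenvalues, eigenvectors)
lam, V = ht.linalg.eigh(A)

# full SVD of a general, not necessarily tall-skinny, matrix
B = ht.random.randn(1000, 600, split=0)
U, S, Vt = ht.linalg.svd(B)  # return order/transposition assumed NumPy-like
```

As usual for Heat, the same script runs serially or under `mpirun -n <procs>` without changes.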
Linear Algebra & Decomposition
- #1538 New `decomposition` module and PCA interface (by @mrfh92)
- #1561 Distributed randomized SVD (by @mrfh92)
- #1629 Incremental SVD/PCA (by @mrfh92)
- #1639 Dynamic Mode Decomposition (DMD) (by @mrfh92; see the sketch after this list)
- #1744 QR decomposition for non-tall-skinny matrices and `split=0` (by @mrfh92)
- #1697 Polar decomposition (by @mrfh92)
- #1794 Dynamic Mode Decomposition with Control (DMDc) (by @mrfh92)
- #1824 Symmetric Eigenvalue Decomposition (`ht.linalg.eigh`) and full SVD (`ht.linalg.svd`) based on Zolotarev Polar Decomposition (by @mrfh92)
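A hypothetical usage sketch for the new DMD classes; the constructor argument and method names below mirror the scikit-learn-style interface of the `decomposition` module and are assumptions, not the verified API:

```python
import heat as ht

# snapshot matrix: column k holds the system state at time step k
X = ht.random.randn(500, 100, split=0)

# fit a rank-truncated DMD model (svd_rank is an assumed parameter name)
dmd = ht.decomposition.DMD(svd_rank=10)
dmd.fit(X)
```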
Signal Processing
- #1865 Add `stride` argument for `ht.signal.convolve` (by @lolacaro)
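A minimal sketch of strided convolution; the `stride` keyword is the new argument from #1865, while the `mode` value shown is just one of the standard options:

```python
import heat as ht

signal = ht.arange(100, dtype=ht.float32, split=0)
kernel = ht.ones(5, dtype=ht.float32)

# evaluate the convolution only at every second output position
out = ht.signal.convolve(signal, kernel, mode="valid", stride=2)
```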
I/O
- #1753 Added `slice` argument for `ht.load_hdf5` (by @JuanPedroGHM)
- #1766 Support for the Zarr data format (by @Berkant03)
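A minimal round-trip sketch for the new Zarr support; the argument order and the `split` keyword on load are assumptions modeled on Heat's other I/O routines:

```python
import heat as ht

x = ht.random.randn(10000, 64, split=0)
ht.save_zarr(x, "data.zarr")            # each process writes its local chunk
y = ht.load_zarr("data.zarr", split=0)  # read back, distributed along axis 0
```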
Core & MPI
- #1765 Large data counts support for MPI Communication (by @JuanPedroGHM)
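The change is transparent to user code: any communication-heavy operation, such as the redistribution sketched below, can now move more than 2³¹-1 elements in a single MPI call instead of failing at the old count limit. Note that this sketch needs a few GB of memory per process to actually run:

```python
import heat as ht

n = 2**31 + 1                             # beyond the old int32 element-count limit
x = ht.zeros(n, dtype=ht.uint8, split=0)  # distributed vector, ~2 GB in total
y = ht.resplit(x, None)                   # replicate onto every process in one large transfer
```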
Other New Features
- #1129 Support Apple MPS acceleration (by @ClaudiaComito; see the sketch after this list)
- #1773 `ht.eq`, `ht.ne` now allow non-array operands (by @Marc-Jindra)
- #1888 Expand NumPy functions to DNDarrays (by @mtar)
- #1895 Extend torch functions to DNDarrays (by @mtar)
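A sketch of the experimental MPS path; the assumption here is that Heat's generic `device="gpu"` setting resolves to the Metal backend on Apple Silicon, so consult the 1.6.0 device documentation for the exact spelling:

```python
import heat as ht

# on Apple Silicon, the GPU device is assumed to map to the MPS backend
x = ht.random.randn(2048, 2048, split=0, device="gpu")
y = x @ x.T  # matrix product executed on the Metal GPU
```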
Bug Fixes
- #1646 Raise error for batched vector inputs to `ht.linalg.matmul` (by @FOsterfeld)
- #993 Fixed precision loss in several functions when dtype is float64 (by @neosunhan)
- #1756 Fix printing of non-distributed data (by @ClaudiaComito)
- #1831 Remove unnecessary `contiguous()` calls (by @Marc-Jindra)
- #1893 Bug fixes during ESAPCA benchmarking (by @mrfh92)
- #1880 Exit installation if conda environment cannot be activated (by @thawn)
- #1905 Resolve bug in rSVD / wrong citation in polar.py (by @mrfh92)
- #1921 Fix I/O test failures with Zarr v3.0.9 in `ht.save_zarr()` (by @LScheib)
Interoperability & Build System
- #1826 Make unit tests compatible with NumPy 2.x (by @Marc-Jindra)
- #1832 Transition to `pyproject.toml`, Ruff, and mypy (by @JuanPedroGHM)
Contributors
@Berkant03, @ClaudiaComito, @FOsterfeld, @joernhees, @jolemse, @JuanPedroGHM, @lolacaro, @LScheib, @Marc-Jindra, @mrfh92, @mtar, @neosunhan, and @thawn.
Acknowledgements and Disclaimer
The SVD, PCA, and DMD functionalities were funded by the European Space Agency (ESA) under the ESAPCA programme.
This work is partially carried out under a programme of, and funded by, the European Space Agency. Any view expressed in this repository or related publications can in no way be taken to reflect the official opinion of the European Space Agency.
Heat v1.5.1 - Support for torch 2.6 and bug fixes
Changes
Compatibility
Bug Fixes
- #1791 `heat.eq`, `heat.ne` now allow non-array operands (by @github-actions[bot])
- #1790 Fixed precision loss in several functions when dtype is float64 (by @github-actions[bot])
- #1764 Fixed printing of non-distributed data (by @github-actions[bot])
Contributors
@ClaudiaComito, @JuanPedroGHM, @joernhees, @mrfh92, and @mtar
Heat 1.5 Release: distributed matrix factorization and more
Heat 1.5 Release Notes
- Overview
- Highlights
- Performance Improvements
- Sparse
- Signal Processing
- RNG
- Statistics
- Manipulations
- I/O
- Machine Learning
- Deep Learning
- Other Updates
- Contributors
Overview
With Heat 1.5 we release the first set of features developed within the ESAPCA project funded by the European Space Agency (ESA).
The main focus of this release is on distributed linear algebra operations, such as tall-skinny SVD, batch matrix multiplication, and a triangular solver. We also introduce vectorization via `ht.vmap` across MPI processes, and batch-parallel random number generation as the default for distributed operations.
This release also includes a new class for distributed Compressed Sparse Column matrices, paving the way for future implementation of distributed sparse matrix multiplication.
On the performance side, our new array redistribution via MPI Custom Datatypes provides significant speed-up in operations that require it, such as FFTs (see Dalcin et al., 2018).
We are grateful to our community of users, students, open-source contributors, the European Space Agency and the Helmholtz Association for their support and feedback.
Highlights
- [ESAPCA] Distributed tall-skinny SVD: `ht.linalg.svd` (by @mrfh92; see the sketch after this list)
- Distributed batch matrix multiplication: `ht.linalg.matmul` (by @FOsterfeld)
- Distributed solver for triangular systems: `ht.linalg.solve_triangular` (by @FOsterfeld)
- Vectorization across MPI processes: `ht.vmap` (by @mrfh92)
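A minimal sketch of the new linear algebra routines; the return conventions and the handling of upper vs. lower triangular systems are assumptions based on the NumPy/SciPy-style interface:

```python
import heat as ht

# tall-skinny matrix, distributed along the long axis
A = ht.random.randn(100000, 32, split=0)
U, S, V = ht.linalg.svd(A)  # tall-skinny SVD (return order assumed)

# triangular system: assumed to expect an upper-triangular coefficient matrix
T = ht.array([[2.0, 1.0], [0.0, 3.0]])
b = ht.array([[3.0], [6.0]])
x = ht.linalg.solve_triangular(T, b)
```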
Other Changes
Performance Improvements
- #1493 Redistribution speed-up via MPI Custom Datatypes, available by default in `ht.resplit` (by @JuanPedroGHM)
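No code changes are needed to benefit from this: the faster path is used whenever an array changes its distribution axis, for example:

```python
import heat as ht

x = ht.random.randn(4096, 4096, split=0)  # distributed along rows
y = ht.resplit(x, 1)                      # redistributed along columns via MPI custom datatypes
```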
Sparse
- #1377 New class: Distributed Compressed Sparse Column Matrix `ht.sparse.DCSC_matrix()` (by @Mystic-Slice)
Signal Processing
- #1515 Support batch 1-D convolution in `ht.signal.convolve` (by @ClaudiaComito)
RNG
Statistics
- #1420 Support sketched percentile/median for large datasets with `ht.percentile(sketched=True)` (and `ht.median`) (by @mrfh92; see the sketch after this list)
- #1510 Support multiple axes for distributed `ht.percentile` and `ht.median` (by @ClaudiaComito)
Manipulations
- #1419 Implement distributed `unfold` operation (by @FOsterfeld)
I/O
- #1602 Improve load balancing when loading .npy files from path (by @REISII)
- #1551 Improve load balancing when loading .csv files from path (by @REISII)
Machine Learning
- #1593 Improved batch-parallel clustering: `ht.cluster.BatchParallelKMeans` and `ht.cluster.BatchParallelKMedians` (by @mrfh92)
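A sketch of the batch-parallel clustering interface, assuming the scikit-learn-style constructor and `fit_predict` method used elsewhere in `ht.cluster`:

```python
import heat as ht

X = ht.random.randn(10000, 8, split=0)  # samples distributed across processes

kmeans = ht.cluster.BatchParallelKMeans(n_clusters=4)
labels = kmeans.fit_predict(X)          # one label per local sample
```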
Deep Learning
Other Updates
- #1618 Support mpi4py 4.x.x (by @JuanPedroGHM)
Contributors
@mrfh92, @FOsterfeld, @JuanPedroGHM, @Mystic-Slice, @ClaudiaComito, @REISII, @mtar and @krajsek
Heat 1.5.0-rc1: Pre-Release
Changes
Cluster
Data
IO
- #1602 Improved load balancing when loading .npy files from path. (by @REISII)
- #1551 Improved load balancing when loading .csv files from path. (by @REISII)
Linear Algebra
- #1261 Batched matrix multiplication. (by @FOsterfeld)
- #1504 Add solver for triangular systems. (by @FOsterfeld)
Manipulations
- #1419 Implement distributed `unfold` operation. (by @FOsterfeld)
Random
Signal
- #1515 Support batch 1-D convolution in `ht.signal.convolve`. (by @ClaudiaComito)
Statistics
- #1510 Support multiple axes for `ht.percentile`. (by @ClaudiaComito)
Sparse
- #1377 Distributed Compressed Sparse Column Matrix. (by @Mystic-Slice)
Other
- #1618 Support mpi4py 4.x.x (by @JuanPedroGHM)
Contributors
@ClaudiaComito, @FOsterfeld, @JuanPedroGHM, @REISII, @mrfh92, @mtar and @krajsek
Heat 1.4.2 - Maintenance Release
Changes
Interoperability
- #1467, #1525 Support PyTorch 2.3.1 (by @mtar)
- #1535 Address test failures after netCDF4 1.7.1, numpy 2 releases (by @ClaudiaComito)
Contributors
@ClaudiaComito, @mrfh92 and @mtar
Heat 1.4.1: Bug fix release
Changes
Bug fixes
- #1472 DNDarrays returned by `_like` functions default to same device as input DNDarray (by @mrfh92, @ClaudiaComito)
Maintenance
Contributors
Interactive HPC tutorials, distributed FFT, batch-parallel clustering, support PyTorch 2.2.2
Changes
Documentation
- #1406 New tutorials for interactive parallel mode for both HPC and local usage (by @ClaudiaComito)
🔥 Features
- #1288 Batch-parallel K-means and K-medians (by @mrfh92)
- #1228 Introduce in-place operators for `arithmetics.py` (by @LScheib)
- #1218 Distributed Fast Fourier Transforms (by @ClaudiaComito; see the sketch after this list)
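A minimal sketch of the distributed FFT calls, which follow the `numpy.fft` naming scheme; the exact set of transforms available is defined by #1218:

```python
import heat as ht

x = ht.random.randn(1024, 1024, split=0)
X1 = ht.fft.fft(x, axis=1)  # 1-D FFT along the non-split axis
X2 = ht.fft.fft2(x)         # 2-D FFT; redistribution is handled internally
```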
Bug fixes
- #1363 `ht.array` constructor respects implicit torch device when copy is set to false (by @JuanPedroGHM)
- #1216 Avoid unnecessary gathering of distributed operand (by @samadpls)
- #1329 Refactoring of QR: stabilized Gram-Schmidt for split=1 and TS-QR for split=0 (by @mrfh92)
Interoperability
- #1418 and #1290: Support PyTorch 2.2.2 (by @mtar)
- #1315 and #1337: Fix some NumPy deprecations in the core and statistics tests (by @FOsterfeld)
Contributors
@ClaudiaComito, @FOsterfeld, @JuanPedroGHM, @LScheib, @mrfh92, @mtar, @samadpls
Bug fixes, Docker documentation update
Bug fixes
- #1259 Bug fix for `ht.regression.Lasso()` on GPU (by @mrfh92)
- #1201 Fix `ht.diff` for 1-element-axis edge case (by @mtar)
Changes
Interoperability
- #1257 Docker release 1.3.x update (by @JuanPedroGHM)
Maintenance
- #1274 Update version before release (by @ClaudiaComito)
- #1267 Unit tests: increase tolerance for `ht.allclose` on `ht.inv` operations for all torch versions (by @ClaudiaComito)
- #1266 Sync `pre-commit` configuration with `main` branch (by @ClaudiaComito)
- #1264 Fix PyTorch release tracking workflows (by @mtar)
- #1234 Update sphinx package requirements (by @mtar)
- #1187 Create configuration file for Read the Docs (by @mtar)
Contributors
@ClaudiaComito, @JuanPedroGHM, @bhagemeier, @mrfh92 and @mtar
Scalable SVD, GSoC '22 contributions, Docker image, PyTorch 2 support, AMD GPU acceleration
This release includes many important updates (see below). We particularly would like to thank our enthusiastic GSoC 2022 / tentative GSoC 2023 contributors @Mystic-Slice @neosunhan @Sai-Suraj-27 @shahpratham @AsRaNi1 @Ishaan-Chandak 🙏🏼 Thank you so much!
Highlights
- #1155 Support PyTorch 2.0.1 (by @ClaudiaComito)
- #1152 Support AMD GPUs (by @mtar)
- #1126 Distributed hierarchical SVD (by @mrfh92)
- #1028 Introducing the `sparse` module: Distributed Compressed Sparse Row Matrix (by @Mystic-Slice)
- Performance improvements:
  - #1125 Distributed `heat.reshape()` speed-up (by @ClaudiaComito)
  - #1141 `heat.pow()` speed-up when exponent is `int` (by @ClaudiaComito, @coquelin77)
  - #1119 `heat.array()` defaults to `copy=None` (i.e., copies only if necessary) (by @ClaudiaComito, @neosunhan)
- #970 Dockerfile and accompanying documentation (by @bhagemeier)
Changelog
Array-API compliance / Interoperability
- #1154 Introduce `DNDarray.__array__()` method for interoperability with `numpy`, `xarray` (by @ClaudiaComito)
- #1147 Adopt NEP 29, drop support for PyTorch 1.7, Python 3.6 (by @mtar)
- #1119 `ht.array()` defaults to `copy=None` (i.e., copies only if necessary) (by @ClaudiaComito)
- #1020 Implement `broadcast_arrays`, `broadcast_to` (by @neosunhan)
- #1008 API: Rename `keepdim` kwarg to `keepdims` (by @neosunhan)
- #788 Interface for DPPY interoperability (by @coquelin77, @fschlimb)
New Features
- #1126 Distributed hierarchical SVD (by @mrfh92)
- #1020 Implement `broadcast_arrays`, `broadcast_to` (by @neosunhan)
- #983 Signal processing: fully distributed 1D convolution (by @shahpratham)
- #1063 Add `eq` to Device (by @mtar)
Bug Fixes
- #1141 `heat.pow()` speed-up when exponent is `int` (by @ClaudiaComito)
- #1136 Fixed PyTorch version check in `sparse` module (by @Mystic-Slice)
- #1098 Validate number of dimensions in input to `ht.sparse.sparse_csr_matrix` (by @Ishaan-Chandak)
- #1095 Convolve with distributed kernel on multiple GPUs (by @shahpratham)
- #1094 Fix division precision error in `random` module (by @Mystic-Slice)
- #1075 Fixed initialization of DNDarrays communicator in some routines (by @AsRaNi1)
- #1066 Verify input object type and layout + supporting tests (by @Mystic-Slice)
- #1037 Distributed weighted `average()` along tuple of axes: shape of `weights` to match shape of input (by @Mystic-Slice)
Benchmarking
- #1137 Continuous benchmarking of runtime (by @JuanPedroGHM)
Documentation
- #1150 Refactoring for efficiency and readability (by @Sai-Suraj-27)
- #1130 Reintroduce Quick Start (by @ClaudiaComito)
- #1079 A better README file (by @Sai-Suraj-27)
Linear Algebra
Contributors
@AsRaNi1, @ClaudiaComito, @Ishaan-Chandak, @JuanPedroGHM, @Mystic-Slice, @Sai-Suraj-27, @bhagemeier, @coquelin77, @mrfh92, @mtar, @neosunhan, @shahpratham
Bug fixes, support OpenMPI>=4.1.2, support PyTorch 1.13.1
Changes
Communication
- #1058 Fix edge-case contiguity mismatch for Allgatherv (by @ClaudiaComito)