Releases: helmholtz-analytics/heat

Heat 1.6.0 - More Decompositions, Larger Buffers and Apple MPS Support

03 Sep 11:35
83727cb

Heat 1.6.0 Release Notes


Overview

With Heat 1.6.0 we release the next major set of features, including continued developments within the ESAPCA project funded by the European Space Agency (ESA).

The main focus of this release is a significant expansion of our distributed linear algebra capabilities, including full SVD, symmetric eigenvalue decomposition, and polar decomposition, all leveraging the efficient "Zolotarev approach". We also introduce Dynamic Mode Decomposition (DMD and DMDc) for the analysis of complex systems.
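
Heat's distributed routines compute the polar factors with Zolotarev iterations rather than a reference SVD; the plain NumPy sketch below only illustrates how the three decompositions relate (polar decomposition, symmetric eigendecomposition of the Hermitian factor, and the full SVD recovered from the two):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# Polar decomposition A = U_p @ H, obtained here via a reference SVD.
W, s, Vt = np.linalg.svd(A, full_matrices=False)
U_p = W @ Vt                    # factor with orthonormal columns
H = Vt.T @ np.diag(s) @ Vt      # symmetric positive semidefinite factor
assert np.allclose(A, U_p @ H)

# A symmetric eigendecomposition of H then yields the full SVD of A:
# H = V @ diag(evals) @ V.T, and the eigenvalues of H are the
# singular values of A.
evals, V = np.linalg.eigh(H)
assert np.allclose(np.sort(evals)[::-1], s)
```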

On the performance side, the MPI communication layer has been enhanced to support buffer sizes exceeding the previous limit of 2³¹-1 elements, so much larger arrays can now be communicated in a single operation.
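
The 2³¹-1 limit stems from MPI message counts being 32-bit integers. One common workaround, shown here as a plain-Python sketch with a hypothetical `chunk_counts` helper (Heat's actual implementation lives in its MPI layer and may use derived datatypes instead of explicit chunking), is to split an oversized transfer into sub-limit pieces:

```python
# Hypothetical helper: split an element count that exceeds the 32-bit MPI
# count limit into pieces that each fit into a single MPI call.
INT_MAX = 2**31 - 1

def chunk_counts(total_elements, chunk_limit=INT_MAX):
    """Return a list of per-call element counts, each <= chunk_limit."""
    chunks = []
    remaining = total_elements
    while remaining > 0:
        step = min(remaining, chunk_limit)
        chunks.append(step)
        remaining -= step
    return chunks

# A 5-billion-element buffer needs 3 calls under the 2**31 - 1 limit.
counts = chunk_counts(5 * 10**9)
assert len(counts) == 3 and sum(counts) == 5 * 10**9
```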

This release also introduces support for the Zarr data format for I/O operations and experimental hardware acceleration on Apple Silicon via Metal Performance Shaders (MPS). Finally, the project's build system has been modernized to use pyproject.toml, improving its maintainability and alignment with current Python packaging standards.

With this release, Heat drops support for Python 3.9, now requiring Python 3.10 or newer, and extends compatibility to include PyTorch versions up to 2.7.x.

We are grateful to our community of users, students, open-source contributors, the European Space Agency, and the Helmholtz Association for their support and feedback.

Highlights

  • [ESAPCA] Symmetric Eigenvalue Decomposition (ht.linalg.eigh) and full SVD (ht.linalg.svd) via Zolotarev Polar Decomposition (by @mrfh92)
  • [ESAPCA] Dynamic Mode Decomposition with and without control: ht.decomposition.DMD, ht.decomposition.DMDc (by @mrfh92)
  • Support for communicating MPI buffers larger than 2³¹-1 elements (by @JuanPedroGHM)
  • I/O support for the Zarr data format: ht.load_zarr, ht.save_zarr (by @Berkant03)
  • Expanded QR decomposition for non-tall-skinny matrices (by @mrfh92)
  • Support for Apple MPS hardware acceleration (by @ClaudiaComito)
  • Strided 1D convolution (by @lolacaro)
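
The semantics of the new strided convolution can be sketched in NumPy as a full 1D convolution followed by subsampling; the `strided_convolve` helper below is illustrative only and is not Heat's API:

```python
import numpy as np

def strided_convolve(a, v, stride):
    """Illustrative helper: full 1D convolution, keeping every stride-th sample."""
    return np.convolve(a, v, mode="full")[::stride]

signal = np.arange(10.0)
kernel = np.array([1.0, 0.5])
out = strided_convolve(signal, kernel, stride=2)
# Equivalent to subsampling the unstrided result.
assert np.array_equal(out, np.convolve(signal, kernel)[::2])
```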

Linear Algebra & Decomposition

#1538 New decomposition module and PCA interface (by @mrfh92)
#1561 Distributed randomized SVD (by @mrfh92)
#1629 Incremental SVD/PCA (by @mrfh92)
#1639 Dynamic Mode Decomposition (DMD) (by @mrfh92)
#1744 QR decomposition for non-tall-skinny matrices and split=0 (by @mrfh92)
#1697 Polar decomposition (by @mrfh92)
#1794 Dynamic Mode Decomposition with Control (DMDc) (by @mrfh92)
#1824 Symmetric Eigenvalue Decomposition (ht.linalg.eigh) and full SVD (ht.linalg.svd) based on Zolotarev Polar Decomposition (by @mrfh92)

Signal Processing

#1865 Add stride argument for ht.signal.convolve (by @lolacaro)

I/O

#1753 Added slice argument for ht.load_hdf5 (by @JuanPedroGHM)
#1766 Support for the zarr data format (by @Berkant03)

Core & MPI

#1765 Large data counts support for MPI Communication (by @JuanPedroGHM)

Other New Features

#1129 Support Apple MPS acceleration (by @ClaudiaComito)
#1773 ht.eq, ht.ne now allow non-array operands (by @Marc-Jindra)
#1888 Expand NumPy functions to DNDarrays (by @mtar)
#1895 Extend torch functions to DNDarrays (by @mtar)

Bug Fixes

#1646 Raise Error for batched vector inputs on ht.linalg.matmul (by @FOsterfeld)
#993 Fixed precision loss in several functions when dtype is float64 (by @neosunhan)
#1756 Fix printing of non-distributed data (by @ClaudiaComito)
#1831 Remove unnecessary contiguous() calls (by @Marc-Jindra)
#1893 Bug-fixes during ESAPCA benchmarking (by @mrfh92)
#1880 Exit installation if conda environment cannot be activated (by @thawn)
#1905 Resolve bug in rSVD / wrong citation in polar.py (by @mrfh92)
#1921 Fix IO test failures with Zarr v3.0.9 in ht.save_zarr() (by @LScheib)

Interoperability & Build System

#1826 Make unit tests compatible with NumPy 2.x (by @Marc-Jindra)
#1832 Transition to pyproject.toml, Ruff, and mypy (by @JuanPedroGHM)

Contributors

@Berkant03, @ClaudiaComito, @FOsterfeld, @joernhees, @jolemse, @JuanPedroGHM, @lolacaro, @LScheib, @Marc-Jindra, @mrfh92, @mtar, @neosunhan, and @thawn.

Acknowledgements and Disclaimer

The SVD, PCA, and DMD functionalities were funded by the European Space Agency (ESA) under the ESAPCA programme.

This work is partially carried out under a programme of, and funded by, the European Space Agency. Any view expressed in this repository or related publications can in no way be taken to reflect the official opinion of the European Space Agency.

Heat v1.5.1 - Support for torch 2.6 and bug fixes

17 Feb 15:04
2008688


Contributors

@ClaudiaComito, @JuanPedroGHM, @joernhees, @mrfh92, and @mtar

Heat 1.5 Release: distributed matrix factorization and more

28 Oct 12:31
7e15ad2

Heat 1.5 Release Notes


Overview

With Heat 1.5 we release the first set of features developed within the ESAPCA project funded by the European Space Agency (ESA).

The main focus of this release is on distributed linear algebra operations, such as tall-skinny SVD, batch matrix multiplication, and a triangular solver. We also introduce vectorization via vmap across MPI processes, and batch-parallel random number generation as the default for distributed operations.
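
The tall-skinny SVD follows the classic two-step scheme: QR factorization of the tall matrix, then a small SVD of the triangular factor. In Heat the QR step is distributed across MPI processes (TS-QR); this serial NumPy sketch shows only the underlying math:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 8))  # tall-skinny: far more rows than columns

# Step 1: QR of the tall matrix (distributed TS-QR in Heat, plain QR here).
Q, R = np.linalg.qr(A)              # Q: (1000, 8), R: (8, 8)

# Step 2: small SVD of the 8x8 triangular factor.
Ur, s, Vt = np.linalg.svd(R)

# Combine: A = (Q @ Ur) @ diag(s) @ Vt is a full SVD of A.
U = Q @ Ur
assert np.allclose(A, U @ np.diag(s) @ Vt)
```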

This release also includes a new class for distributed Compressed Sparse Column matrices, paving the way for future implementation of distributed sparse matrix multiplication.

On the performance side, our new array redistribution via MPI Custom Datatypes provides significant speed-up in operations that require it, such as FFTs (see Dalcin et al., 2018).

We are grateful to our community of users, students, open-source contributors, the European Space Agency and the Helmholtz Association for their support and feedback.

Highlights

  • [ESAPCA] Distributed tall-skinny SVD: ht.linalg.svd (by @mrfh92)
  • Distributed batch matrix multiplication: ht.linalg.matmul (by @FOsterfeld)
  • Distributed solver for triangular systems: ht.linalg.solve_triangular (by @FOsterfeld)
  • Vectorization across MPI processes: ht.vmap (by @mrfh92)

Other Changes

Performance Improvements

  • #1493 Redistribution speed-up via MPI Custom Datatypes available by default in ht.resplit (by @JuanPedroGHM)

Sparse

  • #1377 New class: Distributed Compressed Sparse Column Matrix ht.sparse.DCSC_matrix() (by @Mystic-Slice)

RNG

  • #1508 Introduce batch-parallel RNG as default for distributed operations (by @mrfh92)
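
Batch-parallel RNG means each process draws from its own independent, reproducible stream derived from one global seed. The hypothetical helper below sketches the idea with NumPy's `SeedSequence` (Heat's internal seeding scheme may differ):

```python
import numpy as np

def local_generator(base_seed, rank):
    """Hypothetical helper: derive an independent stream for one MPI rank."""
    # SeedSequence.spawn yields statistically independent child seeds,
    # deterministic in (base_seed, rank).
    child = np.random.SeedSequence(base_seed).spawn(rank + 1)[rank]
    return np.random.default_rng(child)

# Two "ranks" draw different numbers, yet each stream is reproducible.
x0 = local_generator(42, rank=0).random(3)
x1 = local_generator(42, rank=1).random(3)
assert not np.allclose(x0, x1)
assert np.allclose(x0, local_generator(42, rank=0).random(3))
```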

Statistics

  • #1420 Support sketched percentile/median for large datasets with ht.percentile(sketched=True) (and ht.median) (by @mrfh92)
  • #1510 Support multiple axes for distributed ht.percentile and ht.median (by @ClaudiaComito)

I/O

  • #1602 Improve load balancing when loading .npy files from path (by @REISII)
  • #1551 Improve load balancing when loading .csv files from path (by @REISII)

Machine Learning

  • #1593 Improved batch-parallel clustering ht.cluster.BatchParallelKMeans and ht.cluster.BatchParallelKMedians (by @mrfh92)

Contributors

@mrfh92, @FOsterfeld, @JuanPedroGHM, @Mystic-Slice, @ClaudiaComito, @REISII, @mtar and @krajsek

Heat 1.5.0-rc1: Pre-Release

10 Sep 13:33
0ddb7f6
Pre-release

Changes

IO

  • #1602 Improved load balancing when loading .npy files from path. (by @REISII)
  • #1551 Improved load balancing when loading .csv files from path. (by @REISII)

Random

  • #1508 Introduce batch-parallel RNG as default. (by @mrfh92)

Contributors

@ClaudiaComito, @FOsterfeld, @JuanPedroGHM, @REISII, @mrfh92, @mtar and @krajsek

Heat 1.4.2 - Maintenance Release

12 Jul 11:06
421da62

Contributors

@ClaudiaComito, @mrfh92 and @mtar

Heat 1.4.1: Bug fix release

13 May 11:27
6112775

Changes

Maintenance

  • #1441 Added names of non-core members in citation file (by @mrfh92)

Contributors

@ClaudiaComito and @mrfh92

Interactive HPC tutorials, distributed FFT, batch-parallel clustering, support PyTorch 2.2.2

18 Apr 08:50

Changes

Documentation

  • #1406 New tutorials for interactive parallel mode for both HPC and local usage (by @ClaudiaComito)

Bug fixes

  • #1363 ht.array constructor respects implicit torch device when copy is set to false (by @JuanPedroGHM)
  • #1216 Avoid unnecessary gathering of distributed operand (by @samadpls)
  • #1329 Refactoring of QR: stabilized Gram-Schmidt for split=1 and TS-QR for split=0 (by @mrfh92)

Contributors

@ClaudiaComito, @FOsterfeld, @JuanPedroGHM, @LScheib, @mrfh92, @mtar, @samadpls

Bug fixes, Docker documentation update

23 Nov 09:42
05325e2

Bug fixes

  • #1259 Bug-fix for ht.regression.Lasso() on GPU (by @mrfh92)
  • #1201 Fix ht.diff for 1-element-axis edge case (by @mtar)

Contributors

@ClaudiaComito, @JuanPedroGHM, @bhagemeier, @mrfh92 and @mtar

Scalable SVD, GSoC '22 contributions, Docker image, PyTorch 2 support, AMD GPU acceleration

20 Jun 14:49

This release includes many important updates (see below). We would particularly like to thank our enthusiastic GSoC 2022 and prospective GSoC 2023 contributors @Mystic-Slice @neosunhan @Sai-Suraj-27 @shahpratham @AsRaNi1 @Ishaan-Chandak 🙏🏼 Thank you so much!

Contributors

@AsRaNi1, @ClaudiaComito, @Ishaan-Chandak, @JuanPedroGHM, @Mystic-Slice, @Sai-Suraj-27, @bhagemeier, @coquelin77, @mrfh92, @mtar, @neosunhan, @shahpratham

Bug fixes, support OpenMPI>=4.1.2, support PyTorch 1.13.1

19 Jan 08:51

Contributors

@ClaudiaComito, @JuanPedroGHM