Releases: LLNL/hiop

Scalable checkpointing and restarting

30 Sep 20:15
7ccfa86

The salient feature of this release is support for checkpointing and restarting in the quasi-Newton solver. These features use Axom's scalable Sidre data manager.
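Below is a minimal sketch of how a driver might enable these checkpoints, assuming the usual HiOp quasi-Newton driver structure; the checkpoint-related option names are assumptions inferred from this release note rather than verified HiOp options.

```cpp
// Sketch only: the checkpoint option names below are assumptions inferred from
// the release notes, not verified HiOp options.
#include "hiopInterface.hpp"
#include "hiopNlpFormulation.hpp"
#include "hiopAlgFilterIPM.hpp"

void solve_with_checkpoints(hiop::hiopInterfaceDenseConstraints& problem)
{
  hiop::hiopNlpDenseConstraints nlp(problem);

  // Hypothetical options: write a Sidre-based checkpoint periodically and
  // restart from it on the next run if the checkpoint file is present.
  nlp.options->SetStringValue("checkpoint_save", "yes");
  nlp.options->SetIntegerValue("checkpoint_save_every_N_iter", 10);
  nlp.options->SetStringValue("checkpoint_file", "hiop_state.sidre");
  nlp.options->SetStringValue("checkpoint_load_on_start", "yes");

  hiop::hiopAlgFilterIPMQuasiNewton solver(&nlp);
  solver.run();
}
```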

What's Changed

New Contributors

Full Changelog: v1.0.3...v1.1.0

NLP Moving limits and misc fixes

09 Feb 22:09
6161396

What's Changed

  • Update modules for CI tests on LLNL LC by @nychiang in #679
  • Update cmake build system to require RAJA when GPU compute mode is used by @nychiang in #676
  • Moving limits options for NLP IPM solvers by @cnpetra in #681

Full Changelog: v1.0.2...v1.0.3

v1.0.2

28 Dec 18:17
2378fde

What's Changed

  • Removed deprecated ALG2 for cusparseCsr2cscEx2 by @cnpetra in #671
  • Addressed fixed buffer size vulnerability for vsnprintf by @nychiang in #673
  • Removed stringent -Wall and -Werror from release builds to avoid downstream compilation errors

Full Changelog: v1.0.1...v1.0.2

C++17 compatible and misc fixes

13 Oct 20:16
c5e156c

What's Changed

Default C++ standard remains C++14

Full Changelog: v1.0.0...v1.0.1

Mature solver interfaces and execution backends

08 Sep 17:10
10b7d3e

Notable new features

Interfaces of the various solvers reached an equilibrium point after HiOp was interfaced with multiple optimization front-ends (e.g., power grid ACOPF and SC-ACOPF problems and topology optimization), both on CPUs and GPUs. The PriDec solver reached exascale on Frontier after minor communication optimizations. The quasi-Newton interior-point solver received a couple of updates that increase robustness. The Newton interior-point solver can fully operate on GPUs with select GPU linear solvers (CUSOLVER-LU and Ginkgo).
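For illustration, the sketch below shows how a driver might request the GPU path with one of these device linear solvers; the option names and values are assumptions based on this description, not verified HiOp settings.

```cpp
// Sketch only: option names/values are assumptions based on the release notes,
// not verified HiOp settings; `problem` is a user class implementing
// hiop::hiopInterfaceSparse.
#include "hiopInterface.hpp"
#include "hiopNlpFormulation.hpp"
#include "hiopAlgFilterIPM.hpp"

void solve_on_gpu(hiop::hiopInterfaceSparse& problem)
{
  hiop::hiopNlpSparse nlp(problem);

  nlp.options->SetStringValue("compute_mode", "gpu");   // run the IPM iterations on the device
  nlp.options->SetStringValue("mem_space", "device");   // keep linear algebra objects in GPU memory
  nlp.options->SetStringValue("linear_solver_sparse", "cusolver-lu");  // or "ginkgo"

  hiop::hiopAlgFilterIPMNewton solver(&nlp);
  solver.run();
}
```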

  • Instrumentation of RAJA sparse matrix class with execution spaces by @cnpetra in #589
  • Fix Assignment Typo in hiopMatrixSparseCsrCuda.cpp by @pate7 in #612
  • Use failure not failed in PNNL commit status posting by @cameronrutherford in #609
  • rebuild modules on quartz by @nychiang in #619
  • Use constraint violation in checkTermination by @nychiang in #617
  • MPI communication optimization by @rothpc in #613
  • fix memory leaks in inertia-free alg and condensed linsys by @nychiang in #622
  • Update IPM algorithm for the dense solver by @nychiang in #616
  • Use integer preprocessor macros for version information by @tepperly in #627
  • use compound vec in bicg IR by @nychiang in #621
  • Use bicg ir in the quasi-Newton solver by @nychiang in #620
  • Add support to MPI in C/Fortran examples by @nychiang in #633
  • Refactor CUSOLVER-LU module and interface by @pelesh in #634
  • Add MPI unit test for DenseEx4 by @nychiang in #644
  • Add more options to control NLP scaling by @nychiang in #649
  • Development of the feasibility restoration in the quasi-Newton solver by @nychiang in #647
  • GPU linear solver interface by @pelesh in #650

New Contributors

Execution spaces abstractions and misc fixes

20 Feb 21:58
d0f57c8

This release hosts a series of comprehensive internal developments and software re-engineering efforts to improve portability and performance on accelerator/GPU platforms. No changes to the user interface were introduced in this release.

Notable new features

A new execution space abstraction is introduced to allow multiple hardware backends to run concurrently. The design differentiates between "memory backends" and "execution policies" to allow using RAJA with Umpire-managed memory, RAJA with CUDA- or HIP-managed memory, RAJA with standard host memory, CUDA/HIP kernels with CUDA-, HIP-, or Umpire-managed memory, etc.
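The snippet below is a generic, hypothetical illustration of this separation of concerns; the type and function names are made up for the example and do not mirror HiOp's internal classes.

```cpp
// Generic illustration only: a vector parameterized independently on a memory
// backend (where data lives) and an execution policy (how loops run). The names
// are hypothetical, not HiOp's internal types.
#include <cstddef>
#include <cstdlib>

// A "memory backend" knows only how to allocate/free in a given memory space.
struct HostMemoryBackend {
  static double* allocate(std::size_t n) { return static_cast<double*>(std::malloc(n * sizeof(double))); }
  static void deallocate(double* p) { std::free(p); }
};

// An "execution policy" knows only how to traverse an index range.
struct SequentialExec {
  template <typename Body>
  static void for_each(std::size_t n, Body&& body) {
    for (std::size_t i = 0; i < n; ++i) body(i);
  }
};

// Pairing the two concepts independently is what allows, e.g., Umpire-managed
// memory with a RAJA OpenMP policy, or CUDA-managed memory with a CUDA policy,
// without changing this class.
template <typename MemBackend, typename ExecPolicy>
class ExecSpaceVector {
public:
  explicit ExecSpaceVector(std::size_t n) : n_(n), data_(MemBackend::allocate(n)) {}
  ~ExecSpaceVector() { MemBackend::deallocate(data_); }
  void set_constant(double c) {
    double* d = data_;
    ExecPolicy::for_each(n_, [=](std::size_t i) { d[i] = c; });
  }
private:
  std::size_t n_;
  double* data_;
};

int main() {
  ExecSpaceVector<HostMemoryBackend, SequentialExec> v(8);
  v.set_constant(1.0);  // the same call would dispatch to a device policy if one were plugged in
  return 0;
}
```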

  • Execution spaces: support for memory backends and execution policies by @cnpetra in #543
  • Build: Cuda without raja by @cnpetra in #579
  • Update of RAJA-based dense matrix to support runtime execution spaces by @cnpetra in #580
  • Reorganization of device namespace by @cnpetra in #582
  • RAJA Vector int with ExecSpace by @cnpetra in #583
  • Instrumentation of host vectors with execution spaces by @cnpetra in #584
  • Remove copy from/to device methods in vector classes by @cnpetra in #587
  • Add support for Raja with OpenMP into LLNL CI by @nychiang in #566

New vector classes using vendor-provided APIs were introduced, and the documentation was updated and improved.

  • Refinement of triangular solver implementation for Ginkgo by @fritzgoebel in #585

Bug fixes

New Contributors

Misc build system fixes

21 Oct 21:44
8064ef6

This minor release fixes a couple of issues found in the build system after the major release 0.7 of HiOp.

What's Changed

New Contributors

Full Changelog: v0.7.0...v0.7.1

Fortran interface and misc fixes and improvements

30 Sep 18:54
5f42ab3
  • Fortran interface and examples
  • Bug fixes for sparse device linear solvers
  • Implementation of CUDA CSR matrices
  • Iterative refinement within CUSOLVER linear solver class
  • Improved robustness and performance of mixed dense-sparse solver for AMD/HIP

Ginkgo integration and misc fixes

01 May 13:05
55652fb

This tag provides an initial integration with Ginkgo, fixes a couple of issues, and adds options for (outer) iterative refinement.
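For context, the sketch below shows the textbook (outer) iterative refinement loop in generic form: solve once with the factorization, then repeatedly solve for a correction driven by the residual. It illustrates the idea only and is not the code added in this release.

```cpp
// Generic sketch of (outer) iterative refinement, not HiOp's implementation.
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Solve A x = b by repeatedly correcting x with the residual r = b - A x,
// reusing an (inexact) factorization-based solve for each correction.
std::vector<double> iterative_refinement(
    const std::function<std::vector<double>(const std::vector<double>&)>& apply_A,
    const std::function<std::vector<double>(const std::vector<double>&)>& solve_with_factors,
    const std::vector<double>& b,
    int max_iters = 5,
    double tol = 1e-12)
{
  std::vector<double> x = solve_with_factors(b);          // initial solve
  for (int it = 0; it < max_iters; ++it) {
    const std::vector<double> Ax = apply_A(x);
    std::vector<double> r(b.size());
    double rnorm = 0.0;
    for (std::size_t i = 0; i < b.size(); ++i) {
      r[i] = b[i] - Ax[i];
      rnorm += r[i] * r[i];
    }
    if (std::sqrt(rnorm) <= tol) break;                    // residual small enough: stop
    const std::vector<double> dx = solve_with_factors(r);  // correction step
    for (std::size_t i = 0; i < x.size(); ++i) x[i] += dx[i];
  }
  return x;
}
```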

HIP linear algebra workaround and update for RAJA > v0.14

20 Apr 22:32
a9e2697

This version/tag provides a workaround for an issue in HIP BLAS and updates the RAJA code to work better with newer versions of RAJA.