
Use OrdinaryDiffEqTsit5.jl and OrdinaryDiffEqLowStorageRK.jl #163

Merged
23 commits into main, Dec 4, 2024

Conversation

@JoshuaLampert (Owner) commented Dec 2, 2024

Since we only use Tsit5 and RDPK3SpFSAL35 in the examples, I thought it would be nice to use only OrdinaryDiffEqTsit5.jl and OrdinaryDiffEqLowStorageRK.jl in the examples and to recommend installing OrdinaryDiffEqTsit5.jl in the README and docs. This should significantly reduce precompile time and generally make things smoother and more lightweight. What do you think, @ranocha?
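For illustration, a minimal sketch of what an example then boils down to, with a stand-in right-hand side instead of an actual semidiscretization from this package (assuming the sub-package re-exports `ODEProblem` and `solve`, as OrdinaryDiffEq.jl does):

```julia
# Load only the Tsit5 solver sub-package instead of all of OrdinaryDiffEq.jl;
# this is what keeps load and precompile times low.
using OrdinaryDiffEqTsit5

# Stand-in ODE for illustration; in the real examples the ODE comes from the
# package's semidiscretization instead.
rhs!(du, u, p, t) = (@. du = -u; nothing)

ode = ODEProblem(rhs!, [1.0, 2.0], (0.0, 1.0))
sol = solve(ode, Tsit5(), abstol = 1e-7, reltol = 1e-7, save_everystep = false)
```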

@JoshuaLampert JoshuaLampert marked this pull request as draft December 2, 2024 16:35
@JoshuaLampert JoshuaLampert changed the title Use OrdinaryDiffEqTsit5.jl everywhere Use OrdinaryDiffEqTsit5.jl and OrdinaryDiffEqLowStorageRK.jl Dec 2, 2024
@ranocha (Collaborator) commented Dec 2, 2024

Sounds good to me! However, we need to wait for Trixi.jl to upgrade SciMLBase.jl etc. for this to work properly, don't we?

@JoshuaLampert (Owner, Author) commented Dec 2, 2024

> Sounds good to me! However, we need to wait for Trixi.jl to upgrade SciMLBase.jl etc. for this to work properly, don't we?

We only need that if we want to have Trixi.jl and DispersiveShallowWater.jl in the same project. But this already fails with the newest version of DispersiveShallowWater.jl: installing Trixi.jl in a project where DispersiveShallowWater.jl is installed will downgrade DispersiveShallowWater.jl to an old version (something like v0.3... if I remember correctly), because the newer DispersiveShallowWater.jl versions require versions of RecursiveArrayTools.jl that are not compatible with Trixi.jl, cf. trixi-framework/Trixi.jl#2150.
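The downgrade is easy to reproduce in a throwaway environment; a sketch (the exact versions the resolver picks depend on the current registry state):

```julia
using Pkg

Pkg.activate(temp = true)          # temporary environment, leaves real projects untouched
Pkg.add("DispersiveShallowWater")  # resolves to the newest version
Pkg.add("Trixi")                   # the resolver may now downgrade DispersiveShallowWater.jl
                                   # due to the conflicting RecursiveArrayTools.jl bounds
Pkg.status()                       # shows which versions were actually resolved
```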

codecov bot commented Dec 2, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.97%. Comparing base (7db41d9) to head (ceb1610).
Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #163   +/-   ##
=======================================
  Coverage   97.97%   97.97%           
=======================================
  Files          19       19           
  Lines        1776     1776           
=======================================
  Hits         1740     1740           
  Misses         36       36           

☔ View full report in Codecov by Sentry.

@JoshuaLampert (Owner, Author) commented Dec 2, 2024

However, we would then need to stop supporting Julia v1.9, I guess, and we would need to bump the compat bound for DiffEqBase.jl to v6.152.2.
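For reference, such bumps can be applied from Julia itself, assuming Pkg's `compat` API (editing Project.toml by hand works just as well):

```julia
using Pkg

Pkg.activate(".")                    # the package's own environment
Pkg.compat("DiffEqBase", "6.152.2")  # raise the lower compat bound
Pkg.compat("julia", "1.10")          # drop Julia v1.9 support
Pkg.resolve()                        # check that the new bounds still resolve
```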

@ranocha (Collaborator) commented Dec 2, 2024

That's fine with me for this repo 👍

@JoshuaLampert (Owner, Author) commented

Ok, this required more version bumps than expected. The SciML repos have pretty high compat bounds sometimes... But Downgrade is finally happy. A couple of remarks:

• The failing benchmark action is expected: it uses the main branch as reference but the benchmark/Project.toml from this PR, so it still wants to use OrdinaryDiffEq.jl, which is no longer in the new Project.toml. IMHO, this can be ignored in this PR; the benchmarks themselves should not be influenced by it.

• Codecov is failing, probably because of some change in Julia v1.10 compared to v1.9 regarding the "lispy" tuple iteration we use in the AnalysisCallback (see the sketch after this list): the final call on the empty tuple is not covered anymore. I already saw in previous PRs that these lines are only hit by the Julia v1.9 action, which is disabled now. But I think that's also fine.

• Finally, as a replacement for the Julia v1.9 action I added a test using Julia v1.11. This currently fails an allocation test; it looks like Julia v1.11 allocates more for this example. Are you fine with simply updating the allocation threshold, @ranocha, or do you have an idea where this could come from?
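For context, here is a minimal sketch of the "lispy" tuple-iteration pattern mentioned above (the function names are hypothetical, not the actual AnalysisCallback code):

```julia
# Base case: the final call on the empty tuple. This is the line that
# coverage reports as missed on Julia v1.10.
analyze_quantities(::Tuple{}) = nothing

# Recursive case: handle the first quantity, then recurse on the rest.
# Dispatch unrolls the recursion at compile time, so there is no runtime loop.
function analyze_quantities(quantities::Tuple)
    println("analyzing ", first(quantities))
    analyze_quantities(Base.tail(quantities))
end

analyze_quantities((:mass, :energy))
```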

@ranocha (Collaborator) commented Dec 3, 2024

> Ok, this required more version bumps than expected. The SciML repos have pretty high compat bounds sometimes... But Downgrade is finally happy. A couple of remarks:

> • The failing benchmark action is expected: it uses the main branch as reference but the benchmark/Project.toml from this PR, so it still wants to use OrdinaryDiffEq.jl, which is no longer in the new Project.toml. IMHO, this can be ignored in this PR; the benchmarks themselves should not be influenced by it.

👍

> • Codecov is failing, probably because of some change in Julia v1.10 compared to v1.9 regarding the "lispy" tuple iteration we use in the AnalysisCallback: the final call on the empty tuple is not covered anymore. I already saw in previous PRs that these lines are only hit by the Julia v1.9 action, which is disabled now. But I think that's also fine.

👍

> • Finally, as a replacement for the Julia v1.9 action I added a test using Julia v1.11. This currently fails an allocation test; it looks like Julia v1.11 allocates more for this example. Are you fine with simply updating the allocation threshold, @ranocha, or do you have an idea where this could come from?

That's fine as well. Julia v1.11 introduced the Memory type, so array operations can lead to slightly more allocations tracked by Julia, but they are often more efficient since the compiler can optimize the code better.
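To make the threshold question concrete, here is a hypothetical allocation check in the spirit of such tests (the function and the limit are made up for illustration):

```julia
using Test

# Made-up in-place right-hand side standing in for the real one.
rhs!(du, u) = (@. du = -u; nothing)

u = rand(100)
du = similar(u)
rhs!(du, u)  # run once so compilation does not count towards the allocations

# On Julia v1.11, Memory-backed arrays can make this counter report slightly
# more allocations than on v1.10, so thresholds like this may need raising.
@test (@allocated rhs!(du, u)) <= 0
```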

github-actions bot (Contributor) commented Dec 3, 2024

Benchmark Results

| | main | df98f71... | main/df98f71220ec66... |
|---|---|---|---|
| bbm_1d/bbm_1d_basic.jl | 14 ± 0.37 μs | 14 ± 0.3 μs | 1 |
| bbm_1d/bbm_1d_fourier.jl | 0.285 ± 0.31 ms | 0.223 ± 0.31 ms | 1.28 |
| bbm_bbm_1d/bbm_bbm_1d_basic_reflecting.jl | 0.113 ± 0.0029 ms | 0.114 ± 0.0019 ms | 0.997 |
| bbm_bbm_1d/bbm_bbm_1d_dg.jl | 0.0341 ± 0.00057 ms | 0.0343 ± 0.00052 ms | 0.996 |
| bbm_bbm_1d/bbm_bbm_1d_relaxation.jl | 27.4 ± 0.43 μs | 27.2 ± 0.45 μs | 1.01 |
| bbm_bbm_1d/bbm_bbm_1d_upwind_relaxation.jl | 0.0484 ± 0.0005 ms | 0.0487 ± 0.00047 ms | 0.994 |
| hyperbolic_serre_green_naghdi_1d/hyperbolic_serre_green_naghdi_dingemans.jl | 4.09 ± 0.013 μs | 4.17 ± 0.012 μs | 0.982 |
| serre_green_naghdi_1d/serre_green_naghdi_well_balanced.jl | 0.199 ± 0.0055 ms | 0.197 ± 0.0047 ms | 1.01 |
| svaerd_kalisch_1d/svaerd_kalisch_1d_dingemans_relaxation.jl | 0.146 ± 0.0049 ms | 0.148 ± 0.0031 ms | 0.99 |
| time_to_load | 2.02 ± 0.019 s | 2.01 ± 0.0049 s | 1.01 |

Benchmark Plots

A plot of the benchmark results has been uploaded as an artifact to the workflow run for this PR.

Julia v1.11 reports more allocations in some cases
@JoshuaLampert JoshuaLampert marked this pull request as ready for review December 3, 2024 15:53
@ranocha (Collaborator) commented Dec 3, 2024

Do you have an idea what's going on with the bbm_1d/bbm_1d_fourier.jl benchmark?

@JoshuaLampert (Owner, Author) commented

> Do you have an idea what's going on with the bbm_1d/bbm_1d_fourier.jl benchmark?

No, not really, but it's faster, so I'll take it.

@JoshuaLampert (Owner, Author) commented

> Codecov is failing, probably because of some change in Julia v1.10 compared to v1.9 regarding the "lispy" tuple iteration we use in the AnalysisCallback: the final call on the empty tuple is not covered anymore. I already saw in previous PRs that these lines are only hit by the Julia v1.9 action, which is disabled now. But I think that's also fine.

Looks like it's covered again by Julia v1.11 😄

@JoshuaLampert JoshuaLampert merged commit 83a753a into main Dec 4, 2024
17 of 18 checks passed
@JoshuaLampert JoshuaLampert deleted the ordinarydiffeqtsit5-jl branch December 4, 2024 09:14