
Deeponet #5

Merged · 20 commits · Jun 17, 2024
Conversation

ayushinav (Contributor)
Test Summary:         | Pass  Error  Total     Time
LuxNeuralOperators.jl |   43      1     44  5m07.3s
  Layers              |   32            32  2m49.3s
  FNO                 |    8             8  2m14.4s
  DeepONet            |    3      1      4     3.6s
    DeepONet: CPU     |    3             3     1.1s
    DeepONet: CUDA    |           1      1     2.3s

The failing test reports two main errors:

ERROR: Method overwriting is not permitted during Module precompilation. Use `__precompile__(false)` to opt-out of precompilation.

and

ArgumentError: Illegal conversion of a CUDA.DeviceMemory to a Ptr{Float32}.

I wasn't completely sure about the cause, so I'm noting it here.
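For context, here is a hypothetical minimal sketch (not from this PR) of the kind of method overwriting that produces the first error when a package module is precompiled, along with the opt-out the message suggests:

```julia
# Hypothetical sketch: any method overwriting while a package module is
# being precompiled triggers this error on Julia >= 1.10.
module OverwriteDemo

f(x) = x + 1
f(x) = x + 2  # overwrites the method above -> error during precompilation

end

# The escape hatch the message suggests is `__precompile__(false)` placed
# before the module, though removing the overwrite is usually the real fix.
```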

The first DeepONet constructor provides feature compatibility with NeuralOperators.jl and uses the more commonly used Dense layers to construct the branch and trunk nets. Its documentation can be accessed via help?> DeepONet(;) or any other matching dispatch.
The second constructor connects any two architectures, provided their last layers are compatible. Its documentation shows up when one types help?> DeepONet.
I also had to do a small workaround to improve the printing of the architectures.
We can make tutorials for both implementations. I'm putting the ideas down here until we write docs and tutorials, and before we merge them into NeuralOperators.jl.
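A minimal usage sketch of the two constructors described above (the keyword names and tuple-of-widths convention are assumptions based on the description, not confirmed API):

```julia
using LuxNeuralOperators, Lux, Random

# First dispatch: Dense-based branch and trunk nets built from tuples of
# layer widths, mirroring the NeuralOperators.jl interface (assumed keywords).
model = DeepONet(; branch = (64, 32, 32, 16), trunk = (1, 8, 8, 16))

# Second dispatch: any two Lux architectures whose last layers are
# compatible (both end in a width-16 layer here).
branch = Chain(Dense(64 => 32, tanh), Dense(32 => 16))
trunk = Chain(Dense(1 => 8, tanh), Dense(8 => 16))
model2 = DeepONet(branch, trunk)

ps, st = Lux.setup(Random.default_rng(), model2)
```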

@avik-pal (Member)
Also cc @ChrisRackauckas

@avik-pal (Member) commented Jun 11, 2024

Also, can you open a PR setting up CI workflows? Copy the ones over from https://github.com/LuxDL/Boltz.jl/. You don't have to set up the documentation workflows; we will put the documentation of the package in the main Lux repo: https://lux.csail.mit.edu/stable/api/Domain_Specific_Modeling/Boltz
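For reference, a minimal sketch of what such a workflow file might look like (a hypothetical .github/workflows/CI.yml; the real ones should be copied from Boltz.jl as suggested):

```yaml
# Hypothetical minimal CI workflow; copy the actual ones from Boltz.jl.
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
        with:
          version: "1"
      - uses: julia-actions/julia-buildpkg@v1
      - uses: julia-actions/julia-runtest@v1
```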

@avik-pal (Member)

Rebase; I have added CI (CPU, CUDA, and AMDGPU).

@ayushinav (Contributor, Author)

> Rebase; I have added CI (CPU, CUDA, and AMDGPU).

Thanks, that helps!


codecov bot commented Jun 14, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.53%. Comparing base (5b30d54) to head (ebf9e87).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main       #5      +/-   ##
==========================================
+ Coverage   97.05%   97.53%   +0.47%     
==========================================
  Files           7        8       +1     
  Lines          68       81      +13     
==========================================
+ Hits           66       79      +13     
  Misses          2        2              


@ayushinav (Contributor, Author)

Looks like it's all good? @avik-pal

@avik-pal (Member)

The format check is failing; run the formatter and it should be good.
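For reference, a minimal sketch of running the formatter locally with JuliaFormatter.jl (using the SciML style is an assumption based on the org's usual setup):

```julia
using JuliaFormatter

# Format the whole repository in place. SciML repos typically pin the style
# via .JuliaFormatter.toml, in which case plain format(".") also works.
format(".", SciMLStyle())
```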

@avik-pal merged commit 67d9007 into SciML:main on Jun 17, 2024
6 checks passed
@ayushinav deleted the deeponet branch on June 21, 2024