Deeponet multi-output fix #11
Conversation
src/display.jl
Outdated
# function Base.show(io::IO, model::conv) where {conv <: OperatorConv}
#     # print(io, model.name*"() # "*string(Lux.parameterlength(model))*" parameters")
#     print(io, model.name)
# end

# function Base.show(io::IO, ::MIME"text/plain", model::conv) where {conv <: OperatorConv}
#     show(io, model.name)
# end
Remove these; printing was fixed upstream.
src/deeponet.jl
Outdated
julia> trunk_net = Chain(Dense(1 => 8), Dense(8 => 8), Dense(8 => 16));

julia> additional = Chain(Dense(1 => 4));
The input size of the additional layer should be the inner embedding size.
That way it does not need a reduction/sum/dropdims before the additional layer. It should be `additional = Chain(Dense(16 => 4));` here; otherwise it creates a bottleneck and we lose information. A sketch of the suggested sizing follows below.
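For concreteness, a minimal sketch of the suggested sizing in plain Lux code (the branch net below is an illustrative assumption; only the trunk and additional sizes come from this thread):

```julia
using Lux

# Branch and trunk both end in the inner embedding size (16 here),
# so the additional layer can consume that embedding directly and
# no reduction/sum/dropdims is needed in between.
branch_net = Chain(Dense(2 => 8), Dense(8 => 8), Dense(8 => 16))  # illustrative
trunk_net  = Chain(Dense(1 => 8), Dense(8 => 8), Dense(8 => 16))

# Suggested fix: map the 16-dimensional embedding to the 4 outputs,
# instead of Dense(1 => 4), which would bottleneck the information.
additional = Chain(Dense(16 => 4))
```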
Should be fixed now. Using a linear layer as the additional layer in the cases where we do not have one did not seem ideal to me, because it would imply a weighted sum whose weights are learnt during training; DeepONets by default take the dot product, i.e. a non-weighted sum, which many users could require.
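To illustrate the distinction in plain Julia (not this package's API; `b` and `t` stand in for branch and trunk embeddings):

```julia
using LinearAlgebra

b = randn(16)  # branch embedding for one input function
t = randn(16)  # trunk embedding for one query location

# Default DeepONet combination: the plain, non-weighted dot product.
y_default = dot(b, t)   # == sum(b .* t)

# A linear "additional" layer would instead learn weights w, turning
# the combination into a weighted sum that changes during training.
w = randn(16)            # stands in for learnt weights
y_weighted = sum(w .* b .* t)
```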
Other than the doctests, the failing test cases were due to the compiler. The tests pass if I comment out the …
This is bad. You are returning a 3D array / 4D array based on the input sizes (which won't be type-inferred). Avoid doing the …
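A minimal sketch of the instability being flagged (the function below is hypothetical, not code from this PR):

```julia
using Test

# The output rank depends on a runtime size, so inference only sees a
# Union of a 3D and a 4D array type.
function unstable_output(x::AbstractMatrix)
    if size(x, 2) == 1
        return reshape(x, size(x, 1), 1, 1)      # 3D array
    else
        return reshape(x, size(x, 1), 1, 1, :)   # 4D array
    end
end

# Test.@inferred unstable_output(rand(4, 2))
# errors: inferred return type is Union{Array{Float64, 3}, Array{Float64, 4}}
```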
Not sure how this might be an issue, because the scalar tests also have the same … and still pass, while the Scalar II and Vector Additional layer tests calling … fail.
Only doctests fail for now.
Rebase with the latest changes to main.
Set your git config to rebase on pull instead of merge, else the commit history gets royally messed up.
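For reference, `git config pull.rebase true` sets this per repository; add `--global` to apply it to all repositories.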
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files:

@@            Coverage Diff            @@
##              main      #11     +/-  ##
===========================================
- Coverage   100.00%   93.20%   -6.80%
===========================================
  Files            7        7
  Lines           77      103      +26
===========================================
+ Hits            77       96      +19
- Misses           0        7       +7

☔ View full report in Codecov by Sentry.
Closes #9