Feature Parity with NeuralOperators.jl #1
Comments
IIUC, MNOs are not really a different architecture. They mostly reuse another NO (FNO in the original work) and define different loss functions.
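For illustration, here is a minimal sketch of what that amounts to, assuming a generic Lux-style backbone `model` (e.g. an FNO) and hypothetical names throughout; none of this is an existing API:

```julia
using Lux

# The "Markov" part is only the training objective: the backbone maps the
# state at time t to the state at time t+1, and the loss is one-step.
function markov_loss(model, ps, st, (u_t, u_tp1))
    u_pred, st = model(u_t, ps, st)          # standard Lux call: (x, ps, st) -> (y, st)
    return sum(abs2, u_pred .- u_tp1) / length(u_tp1), st
end

# At inference time, long trajectories come from composing the one-step map
# with itself (the Markov assumption).
function rollout(model, ps, st, u0, nsteps)
    u, traj = u0, [u0]
    for _ in 1:nsteps
        u, st = model(u, ps, st)
        push!(traj, u)
    end
    return traj, st
end
```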
Also, the continuous form would be another good addition.
I'm struggling a bit to implement the continuous variant and would appreciate some discussion here. The general form of a NO layer is $v_{t+1}(x) = \sigma\big(W v_t(x) + (\mathcal{K}(a;\phi)\, v_t)(x)\big)$. Coming to the kernel part, $(\mathcal{K}(a;\phi)\, v_t)(x) = \int_D \kappa_\phi(x, y, a(x), a(y))\, v_t(y)\, dy$. Now, the question is how to evaluate this integral in practice for the continuous variant.
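One possible starting point is the direct quadrature discretization of that integral. A rough sketch, assuming a 1D domain sampled at `grid`, channel-first arrays, and a learnable kernel `κ` that maps a pair of points to a `c_out × c_in` matrix; all names here are illustrative, not package API:

```julia
using NNlib: gelu

# One continuous NO layer: local linear term W plus a kernel integral term,
# with the integral approximated by a uniform quadrature / Monte-Carlo mean
# over the grid points:
#   (K v)(xᵢ) ≈ (1/n) Σⱼ κ(xᵢ, yⱼ) v(:, j)
function kernel_layer(v, grid, W, κ)
    n = length(grid)
    c_out = size(W, 1)
    out = similar(v, c_out, n)
    for i in 1:n
        acc = zeros(eltype(v), c_out)
        for j in 1:n
            acc .+= κ(grid[i], grid[j]) * v[:, j]
        end
        out[:, i] = gelu.(W * v[:, i] .+ acc ./ n)
    end
    return out
end
```

The O(n²) double loop is the price of the fully continuous form; FNO avoids it by restricting κ to a translation-invariant kernel and doing the convolution in Fourier space.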
With the benchmarks looking positive, should we try to get NOMAD and Markov NO in? The FNO benchmarks are something I will have to handle on the LuxLib end. With those done, we can take a slight feature hit on the Graph NO (though GNNLux is a thing now, so we can implement it soon enough after that) and tag a release for this version of NeuralOperators, replacing the old one. We can move this repo over to SciML or keep it here; I don't have a significant preference on my end. Thoughts @ChrisRackauckas @ayushinav?
IIUC, Markov NO is not much of a different architecture. It uses an existing NO, usually FNO (because it's readily available and popular). Even in the previous NeuralOperators.jl, an FNO is returned: https://github.com/SciML/NeuralOperators.jl/blob/main/src/FNO/FNO.jl#L196-L212.
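So the whole thing could plausibly be a thin convenience constructor, loosely mirroring what the linked lines do. A hypothetical sketch (keyword names are assumptions and may not match the actual `FourierNeuralOperator` signature):

```julia
using NeuralOperators

# A Markov NO is just an FNO whose input and output channel counts match,
# so it can be applied autoregressively step by step.
function MarkovNeuralOperator(; chs = (1, 64, 64, 64, 64, 64, 1), modes = (24, 24))
    @assert first(chs) == last(chs) "a Markov NO needs matching in/out channels"
    return FourierNeuralOperator(; chs, modes)
end
```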