Documentation #24

Merged 12 commits on Sep 28, 2024
37 changes: 37 additions & 0 deletions .github/workflows/Documentation.yml
@@ -0,0 +1,37 @@
name: Documentation

on:
push:
branches:
- main
- 'release-'
tags: '*'
pull_request:

jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@latest
with:
version: '1'
- name: Install dependencies
run: |
import Pkg
Pkg.Registry.update()
Pkg.develop(Pkg.PackageSpec(path=pwd()))
Pkg.instantiate()
shell: julia --color=yes --project=docs/ {0}
- name: Build and deploy
env:
JULIA_DEBUG: "Documenter"
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # For authentication with GitHub Actions token
DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }} # For authentication with SSH deploy key
run: julia --project=docs/ --code-coverage=user --color=yes docs/make.jl
- uses: julia-actions/julia-processcoverage@v1
- uses: codecov/codecov-action@v4
with:
file: lcov.info
token: ${{ secrets.CODECOV_TOKEN }}
fail_ci_if_error: true
1 change: 1 addition & 0 deletions .gitignore
@@ -3,3 +3,4 @@ Manifest-v*.toml
.vscode
wip
examples
docs/build
15 changes: 15 additions & 0 deletions docs/Project.toml
@@ -0,0 +1,15 @@
[deps]
CairoMakie = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
NeuralOperators = "ea5c82af-86e5-48da-8ee1-382d6ad7af4b"
Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

[compat]
CairoMakie = "0.12.11"
Documenter = "1.7.0"
Lux = "1"
NeuralOperators = "1"
Optimisers = "0.3.3"
Zygote = "0.6.71"
23 changes: 23 additions & 0 deletions docs/make.jl
@@ -0,0 +1,23 @@
using Documenter, NeuralOperators

cp("./docs/Manifest.toml", "./docs/src/assets/Manifest.toml"; force=true)
cp("./docs/Project.toml", "./docs/src/assets/Project.toml"; force=true)

ENV["GKSwstype"] = "100"
ENV["DATADEPS_ALWAYS_ACCEPT"] = true

include("pages.jl")

makedocs(;
sitename="NeuralOperators.jl",
clean=true,
doctest=false,
linkcheck=true,
modules=[NeuralOperators],
format=Documenter.HTML(;
assets=["assets/favicon.ico"],
canonical="https://luxdl.github.io/NeuralOperators.jl/"),
pages
)

deploydocs(; repo="github.com/LuxDL/NeuralOperators.jl.git", push_preview=true)
9 changes: 9 additions & 0 deletions docs/pages.jl
@@ -0,0 +1,9 @@
pages = [
"NeuralOperators.jl" => "index.md",
"Tutorials" => [
"FNO" => "tutorials/fno.md",
"DeepONet" => "tutorials/deeponet.md",
"NOMAD" => "tutorials/nomad.md"
],
"API Reference" => "api.md"
]
24 changes: 24 additions & 0 deletions docs/src/api.md
@@ -0,0 +1,24 @@
# API Reference

## Pre-Built Architectures

```@docs
NOMAD
DeepONet
FourierNeuralOperator
```

## Building blocks

```@docs
OperatorConv
SpectralConv
OperatorKernel
SpectralKernel
```

## Transform API

```@docs
NeuralOperators.AbstractTransform
```
6 changes: 6 additions & 0 deletions docs/src/assets/Project.toml
@@ -0,0 +1,6 @@
[deps]
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
NeuralOperators = "ea5c82af-86e5-48da-8ee1-382d6ad7af4b"
Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
Binary file added docs/src/assets/favicon.ico
Binary file added docs/src/assets/logo.png
83 changes: 83 additions & 0 deletions docs/src/index.md
@@ -0,0 +1,83 @@
# NeuralOperators

`NeuralOperators.jl` is a Julia package providing architectures for learning mappings
between function spaces and grid-invariant solutions of PDEs.

## Installation

On Julia 1.10+, you can install `NeuralOperators.jl` by running

```julia
import Pkg
Pkg.add("NeuralOperators")
```

The currently provided operator architectures are:

- [Fourier Neural Operators (FNOs)](tutorials/fno.md)
- [DeepONets](tutorials/deeponet.md)
- [Nonlinear Manifold Decoders for Operator Learning (NOMADs)](tutorials/nomad.md)

## Reproducibility

```@raw html
<details><summary>The documentation of this SciML package was built using these direct dependencies,</summary>
```

```@example
using Pkg # hide
Pkg.status() # hide
```

```@raw html
</details>
```

```@raw html
<details><summary>and using this machine and Julia version.</summary>
```

```@example
using InteractiveUtils # hide
versioninfo() # hide
```

```@raw html
</details>
```

```@raw html
<details><summary>A more complete overview of all dependencies and their versions is also provided.</summary>
```

```@example
using Pkg # hide
Pkg.status(; mode=PKGMODE_MANIFEST) # hide
```

```@raw html
</details>
```

```@eval
using TOML
using Markdown
version = TOML.parse(read("../../Project.toml", String))["version"]
name = TOML.parse(read("../../Project.toml", String))["name"]
link_manifest = "https://github.com/SciML/" *
name *
".jl/tree/gh-pages/v" *
version *
"/assets/Manifest.toml"
link_project = "https://github.com/SciML/" *
name *
".jl/tree/gh-pages/v" *
version *
"/assets/Project.toml"
Markdown.parse("""You can also download the
[manifest]($link_manifest)
file and the
[project]($link_project)
file.
""")
```
83 changes: 83 additions & 0 deletions docs/src/tutorials/deeponet.md
@@ -0,0 +1,83 @@
# DeepONets

DeepONets are another class of networks that learn mappings between two function spaces
by separately encoding the input function and the locations at which the output is
evaluated. The latent code of the input function is then contracted with the latent code
of the evaluation location to produce the output. This allows the network to learn
mappings between functions defined on different domains.

```math
\begin{align*}
u(y) \xrightarrow{\text{branch}} & \; b \\
& \quad \searrow\\
&\quad \quad \mathcal{G}_{\theta} u(y) = \sum_k b_k t_k \\
& \quad \nearrow \\
y \; \; \xrightarrow{\text{trunk}} \; \; & t
\end{align*}
```
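The contraction $\mathcal{G}_{\theta} u(y) = \sum_k b_k t_k$ above can be sketched with plain arrays. The linear maps `Wb` and `Wt` and the latent size `p` below are illustrative stand-ins for the branch and trunk networks, not the package's internals:

```julia
using Random

p = 8                          # latent size shared by branch and trunk (illustrative)
rng = Random.MersenneTwister(0)

Wb = randn(rng, p, 32)         # stand-in branch net: 32 sensor values -> latent code b
Wt = randn(rng, p, 1)          # stand-in trunk net: query location y -> latent code t

u_sensors = rand(rng, 32)      # u sampled at 32 fixed sensor points
y = [1.5]                      # a query location in the output domain

b = tanh.(Wb * u_sensors)      # branch code b_k
t = tanh.(Wt * y)              # trunk code t_k
Gu_y = sum(b .* t)             # G_θ u(y) = Σ_k b_k t_k, a scalar
```

Note that the branch sees only fixed sensor values of `u`, while the trunk sees only the query location, which is what lets the two function spaces differ.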

## Usage

Let's try to learn the anti-derivative operator for

```math
u(x) = \sin(\alpha x)
```

That is, we want to learn

```math
\mathcal{G} : u \rightarrow v
```

such that

```math
\frac{dv}{dx}(x) = u(x) \quad \forall \; x \in [0, 2\pi], \; \alpha \in [0.5, 1]
```
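For this choice of $u$, the target has a closed form (up to an additive constant, which the data below fixes to zero):

```math
v(x) = -\frac{\cos(\alpha x)}{\alpha}, \qquad \frac{dv}{dx}(x) = \sin(\alpha x) = u(x)
```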

### Copy-pastable code

```@example deeponet_tutorial
using NeuralOperators, Lux, Random, Optimisers, Zygote, CairoMakie

rng = Random.default_rng()

eval_points = 1
data_size = 64
dim_y = 1
m = 32

xrange = range(0, 2π; length=m) .|> Float32
u_data = zeros(Float32, m, data_size)
α = 0.5f0 .+ 0.5f0 .* rand(Float32, data_size)

y_data = rand(Float32, 1, eval_points, data_size) .* 2π
v_data = zeros(Float32, eval_points, data_size)
for i in 1:data_size
u_data[:, i] .= sin.(α[i] .* xrange)
v_data[:, i] .= -inv(α[i]) .* cos.(α[i] .* y_data[1, :, i])
end

deeponet = DeepONet(
Chain(Dense(m => 8, σ), Dense(8 => 8, σ), Dense(8 => 8, σ)),
Chain(Dense(1 => 4, σ), Dense(4 => 8, σ))
)

ps, st = Lux.setup(rng, deeponet)
data = [((u_data, y_data), v_data)]

function train!(model, ps, st, data; epochs=10)
losses = []
tstate = Training.TrainState(model, ps, st, Adam(0.001f0))
for _ in 1:epochs, (x, y) in data
_, loss, _, tstate = Training.single_train_step!(AutoZygote(), MSELoss(), (x, y),
tstate)
push!(losses, loss)
end
return losses
end

losses = train!(deeponet, ps, st, data; epochs=1000)

lines(losses)
```
89 changes: 89 additions & 0 deletions docs/src/tutorials/fno.md
@@ -0,0 +1,89 @@
# Fourier Neural Operators (FNOs)

FNOs are a subclass of neural operators that learn a kernel operator $\mathcal{K}_{\theta}$,
parameterized by $\theta$, between function spaces:

```math
(\mathcal{K}_{\theta}u)(x) = \int_D \kappa_{\theta}(a(x), a(y), x, y) u(y) dy \quad \forall x \in D
```

The kernel operator makes up a block $v^{(t)}(x)$ that passes information to the next block as:

```math
v^{(t+1)}(x) = \sigma((W^{(t)}v^{(t)} + \mathcal{K}^{(t)}v^{(t)})(x))
```

FNOs choose a translation-invariant kernel $\kappa_{\theta}(x, y) = \kappa_{\theta}(x - y)$, which
turns the integral into a convolution that can be computed efficiently in the Fourier domain:

```math
\begin{align*}
(\mathcal{K}_{\theta}u)(x)
&= \int_D \kappa_{\theta}(x - y) u(y) dy \quad \forall x \in D\\
&= \mathcal{F}^{-1}(\mathcal{F}(\kappa_{\theta}) \cdot \mathcal{F}(u))(x) \quad \forall x \in D
\end{align*}
```

where $\mathcal{F}$ denotes the Fourier transform. Usually not all modes in the
frequency domain are used; the higher modes are often truncated.
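The mode-truncated Fourier multiplication can be sketched in a few lines for a single 1D channel. The function name `spectral_conv`, the complex weights `W`, and the mode count `modes` are illustrative, and FFTW.jl is assumed for the transforms:

```julia
using FFTW  # assumed for rfft/irfft

# One-channel 1D sketch of the weighted, mode-truncated multiplication
# an FNO layer performs in the frequency domain.
function spectral_conv(u::AbstractVector{<:Real}, W::AbstractVector{<:Complex}, modes::Int)
    û = rfft(u)                      # forward real FFT
    v̂ = zeros(eltype(û), length(û))
    v̂[1:modes] .= W .* û[1:modes]    # weight the lowest `modes` frequencies, drop the rest
    return irfft(v̂, length(u))       # back to physical space
end

u = sin.(2π .* (0:31) ./ 32)                  # a single Fourier mode on a periodic grid
v = spectral_conv(u, ones(ComplexF64, 4), 4)  # identity weights on the retained modes
```

With identity weights and an input containing only low frequencies, the layer returns its input, which makes the truncation easy to check.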

## Usage

Let's try to learn the anti-derivative operator for

```math
u(x) = \sin(\alpha x)
```

That is, we want to learn

```math
\mathcal{G} : u \rightarrow v
```

such that

```math
\frac{dv}{dx}(x) = u(x) \quad \forall \; x \in [0, 2\pi], \; \alpha \in [0.5, 1]
```
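The data generated below uses the closed-form antiderivative, with the integration constant set to zero:

```math
v(x) = -\frac{\cos(\alpha x)}{\alpha}, \qquad \frac{dv}{dx}(x) = \sin(\alpha x) = u(x)
```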

### Copy-pastable code

```@example fno_tutorial
using NeuralOperators, Lux, Random, Optimisers, Zygote, CairoMakie

rng = Random.default_rng()

data_size = 128
m = 32

xrange = range(0, 2π; length=m) .|> Float32;
u_data = zeros(Float32, m, 1, data_size);
α = 0.5f0 .+ 0.5f0 .* rand(Float32, data_size);
v_data = zeros(Float32, m, 1, data_size);

for i in 1:data_size
u_data[:, 1, i] .= sin.(α[i] .* xrange)
v_data[:, 1, i] .= -inv(α[i]) .* cos.(α[i] .* xrange)
end

fno = FourierNeuralOperator(gelu; chs=(1, 64, 64, 128, 1), modes=(16,), permuted=Val(true))

ps, st = Lux.setup(rng, fno);
data = [(u_data, v_data)];

function train!(model, ps, st, data; epochs=10)
losses = []
tstate = Training.TrainState(model, ps, st, Adam(0.01f0))
for _ in 1:epochs, (x, y) in data
_, loss, _, tstate = Training.single_train_step!(AutoZygote(), MSELoss(), (x, y),
tstate)
push!(losses, loss)
end
return losses
end

losses = train!(fno, ps, st, data; epochs=100)

lines(losses)
```